I0814 14:11:39.516987      10 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0814 14:11:39.524434      10 e2e.go:124] Starting e2e run "431c97d7-cdd1-4751-91cc-ae0ea0a5123c" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597414288 - Will randomize all specs
Will run 275 of 4992 specs

Aug 14 14:11:40.111: INFO: >>> kubeConfig: /root/.kube/config
Aug 14 14:11:40.170: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 14 14:11:40.364: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 14 14:11:40.546: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 14 14:11:40.546: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 14 14:11:40.546: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 14 14:11:40.595: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 14 14:11:40.595: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 14 14:11:40.596: INFO: e2e test version: v1.18.5
Aug 14 14:11:40.600: INFO: kube-apiserver version: v1.18.4
Aug 14 14:11:40.602: INFO: >>> kubeConfig: /root/.kube/config
Aug 14 14:11:40.625: INFO: Cluster IP family: ipv4
SS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:11:40.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
Aug 14 14:11:40.883: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 14:11:40.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6128
I0814 14:11:40.975680      10 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6128, replica count: 1
I0814 14:11:42.030309      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:11:43.032065      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:11:44.033011      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:11:45.033782      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:11:46.034263      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:11:47.034674      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:11:48.035847      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Aug 14 14:11:48.218: INFO: Created: latency-svc-9nttx
Aug 14 14:11:48.258: INFO: Got endpoints: latency-svc-9nttx [115.830391ms]
Aug 14
14:11:48.403: INFO: Created: latency-svc-rpl6n
Aug 14 14:11:48.429: INFO: Got endpoints: latency-svc-rpl6n [170.461673ms]
Aug 14 14:11:48.483: INFO: Created: latency-svc-n796k
Aug 14 14:11:48.552: INFO: Got endpoints: latency-svc-n796k [292.456018ms]
Aug 14 14:11:48.632: INFO: Created: latency-svc-998zm
Aug 14 14:11:48.690: INFO: Got endpoints: latency-svc-998zm [431.023239ms]
Aug 14 14:11:48.717: INFO: Created: latency-svc-484bw
Aug 14 14:11:48.749: INFO: Got endpoints: latency-svc-484bw [489.57435ms]
Aug 14 14:11:48.789: INFO: Created: latency-svc-48k95
Aug 14 14:11:49.181: INFO: Got endpoints: latency-svc-48k95 [922.142632ms]
Aug 14 14:11:49.216: INFO: Created: latency-svc-kftv9
Aug 14 14:11:49.223: INFO: Got endpoints: latency-svc-kftv9 [963.721258ms]
Aug 14 14:11:49.576: INFO: Created: latency-svc-vhcl2
Aug 14 14:11:49.581: INFO: Got endpoints: latency-svc-vhcl2 [1.322494582s]
Aug 14 14:11:49.667: INFO: Created: latency-svc-xvt2x
Aug 14 14:11:50.151: INFO: Got endpoints: latency-svc-xvt2x [1.88641241s]
Aug 14 14:11:50.171: INFO: Created: latency-svc-mqnk5
Aug 14 14:11:50.392: INFO: Got endpoints: latency-svc-mqnk5 [2.127341571s]
Aug 14 14:11:50.483: INFO: Created: latency-svc-rcrhn
Aug 14 14:11:50.551: INFO: Got endpoints: latency-svc-rcrhn [2.291567016s]
Aug 14 14:11:50.596: INFO: Created: latency-svc-th7gh
Aug 14 14:11:50.707: INFO: Got endpoints: latency-svc-th7gh [2.447866013s]
Aug 14 14:11:50.731: INFO: Created: latency-svc-k978c
Aug 14 14:11:50.745: INFO: Got endpoints: latency-svc-k978c [2.48562636s]
Aug 14 14:11:50.770: INFO: Created: latency-svc-cdl99
Aug 14 14:11:50.796: INFO: Got endpoints: latency-svc-cdl99 [2.531038405s]
Aug 14 14:11:50.857: INFO: Created: latency-svc-tdpb2
Aug 14 14:11:50.866: INFO: Got endpoints: latency-svc-tdpb2 [2.597470211s]
Aug 14 14:11:50.896: INFO: Created: latency-svc-jffnw
Aug 14 14:11:50.928: INFO: Got endpoints: latency-svc-jffnw [2.668276382s]
Aug 14 14:11:51.019: INFO: Created: latency-svc-5h9mx
Aug 14 14:11:51.049: INFO: Created: latency-svc-j2mqn
Aug 14 14:11:51.050: INFO: Got endpoints: latency-svc-5h9mx [2.619725643s]
Aug 14 14:11:51.095: INFO: Got endpoints: latency-svc-j2mqn [2.542889602s]
Aug 14 14:11:51.168: INFO: Created: latency-svc-r7wb6
Aug 14 14:11:51.170: INFO: Got endpoints: latency-svc-r7wb6 [2.480321569s]
Aug 14 14:11:51.204: INFO: Created: latency-svc-sm858
Aug 14 14:11:51.222: INFO: Got endpoints: latency-svc-sm858 [2.472999589s]
Aug 14 14:11:51.245: INFO: Created: latency-svc-t22zz
Aug 14 14:11:51.264: INFO: Got endpoints: latency-svc-t22zz [2.082058069s]
Aug 14 14:11:51.332: INFO: Created: latency-svc-v9n84
Aug 14 14:11:51.336: INFO: Got endpoints: latency-svc-v9n84 [2.112737403s]
Aug 14 14:11:51.405: INFO: Created: latency-svc-zlnjz
Aug 14 14:11:51.486: INFO: Got endpoints: latency-svc-zlnjz [1.904576127s]
Aug 14 14:11:51.498: INFO: Created: latency-svc-w7w77
Aug 14 14:11:51.516: INFO: Got endpoints: latency-svc-w7w77 [1.365152328s]
Aug 14 14:11:51.557: INFO: Created: latency-svc-zqdzl
Aug 14 14:11:51.647: INFO: Got endpoints: latency-svc-zqdzl [1.255205467s]
Aug 14 14:11:51.701: INFO: Created: latency-svc-r8csj
Aug 14 14:11:51.722: INFO: Got endpoints: latency-svc-r8csj [1.171418524s]
Aug 14 14:11:51.829: INFO: Created: latency-svc-hr9wz
Aug 14 14:11:51.858: INFO: Got endpoints: latency-svc-hr9wz [1.150842411s]
Aug 14 14:11:51.941: INFO: Created: latency-svc-vqdb5
Aug 14 14:11:51.990: INFO: Created: latency-svc-lpp6j
Aug 14 14:11:51.990: INFO: Got endpoints: latency-svc-vqdb5 [1.245208244s]
Aug 14 14:11:52.004: INFO: Got endpoints: latency-svc-lpp6j [1.208175827s]
Aug 14 14:11:52.025: INFO: Created: latency-svc-9ngwg
Aug 14 14:11:52.091: INFO: Got endpoints: latency-svc-9ngwg [1.224974775s]
Aug 14 14:11:52.110: INFO: Created: latency-svc-wl7bz
Aug 14 14:11:52.125: INFO: Got endpoints: latency-svc-wl7bz [1.19687067s]
Aug 14 14:11:52.146: INFO: Created: latency-svc-d7vj5
Aug 14 14:11:52.168: INFO: Got endpoints: latency-svc-d7vj5 [1.118106218s]
Aug 14 14:11:52.253: INFO: Created: latency-svc-6sgns
Aug 14 14:11:52.284: INFO: Got endpoints: latency-svc-6sgns [1.188888966s]
Aug 14 14:11:52.285: INFO: Created: latency-svc-qdbsd
Aug 14 14:11:52.402: INFO: Got endpoints: latency-svc-qdbsd [1.231265883s]
Aug 14 14:11:52.452: INFO: Created: latency-svc-6thr7
Aug 14 14:11:52.570: INFO: Got endpoints: latency-svc-6thr7 [1.347964816s]
Aug 14 14:11:52.630: INFO: Created: latency-svc-htkjj
Aug 14 14:11:52.719: INFO: Got endpoints: latency-svc-htkjj [1.45472434s]
Aug 14 14:11:52.858: INFO: Created: latency-svc-tfsnr
Aug 14 14:11:52.866: INFO: Got endpoints: latency-svc-tfsnr [1.53075431s]
Aug 14 14:11:52.939: INFO: Created: latency-svc-sf9fk
Aug 14 14:11:53.007: INFO: Got endpoints: latency-svc-sf9fk [1.520852791s]
Aug 14 14:11:53.070: INFO: Created: latency-svc-qmshp
Aug 14 14:11:53.101: INFO: Got endpoints: latency-svc-qmshp [1.58450689s]
Aug 14 14:11:53.160: INFO: Created: latency-svc-rkvvf
Aug 14 14:11:53.210: INFO: Got endpoints: latency-svc-rkvvf [1.562861603s]
Aug 14 14:11:53.325: INFO: Created: latency-svc-wwtfh
Aug 14 14:11:53.341: INFO: Got endpoints: latency-svc-wwtfh [1.618357297s]
Aug 14 14:11:53.378: INFO: Created: latency-svc-wfqxl
Aug 14 14:11:53.414: INFO: Got endpoints: latency-svc-wfqxl [1.555966108s]
Aug 14 14:11:53.491: INFO: Created: latency-svc-drrr7
Aug 14 14:11:53.534: INFO: Got endpoints: latency-svc-drrr7 [1.543957568s]
Aug 14 14:11:53.623: INFO: Created: latency-svc-d84rj
Aug 14 14:11:53.630: INFO: Got endpoints: latency-svc-d84rj [1.625308296s]
Aug 14 14:11:53.697: INFO: Created: latency-svc-c4t8p
Aug 14 14:11:53.767: INFO: Got endpoints: latency-svc-c4t8p [1.675941703s]
Aug 14 14:11:53.832: INFO: Created: latency-svc-ftmt9
Aug 14 14:11:53.895: INFO: Got endpoints: latency-svc-ftmt9 [1.770501891s]
Aug 14 14:11:53.946: INFO: Created: latency-svc-xncnn
Aug 14 14:11:53.967: INFO: Got endpoints: latency-svc-xncnn [1.799331191s]
Aug 14 14:11:54.031: INFO: Created: latency-svc-nrbtb
Aug 14 14:11:54.092: INFO: Got endpoints: latency-svc-nrbtb [1.808049727s]
Aug 14 14:11:54.093: INFO: Created: latency-svc-mbltq
Aug 14 14:11:54.193: INFO: Got endpoints: latency-svc-mbltq [1.790652422s]
Aug 14 14:11:54.235: INFO: Created: latency-svc-r8zw7
Aug 14 14:11:54.257: INFO: Got endpoints: latency-svc-r8zw7 [1.68644658s]
Aug 14 14:11:54.339: INFO: Created: latency-svc-48bnx
Aug 14 14:11:54.379: INFO: Got endpoints: latency-svc-48bnx [1.660165119s]
Aug 14 14:11:54.591: INFO: Created: latency-svc-znsc9
Aug 14 14:11:54.599: INFO: Got endpoints: latency-svc-znsc9 [1.732008433s]
Aug 14 14:11:54.655: INFO: Created: latency-svc-mbtgc
Aug 14 14:11:54.750: INFO: Got endpoints: latency-svc-mbtgc [1.742744421s]
Aug 14 14:11:54.763: INFO: Created: latency-svc-lvwhd
Aug 14 14:11:54.783: INFO: Got endpoints: latency-svc-lvwhd [1.681903447s]
Aug 14 14:11:54.805: INFO: Created: latency-svc-7g2bv
Aug 14 14:11:54.819: INFO: Got endpoints: latency-svc-7g2bv [1.608892934s]
Aug 14 14:11:54.923: INFO: Created: latency-svc-4zdx2
Aug 14 14:11:54.940: INFO: Got endpoints: latency-svc-4zdx2 [1.598454544s]
Aug 14 14:11:54.962: INFO: Created: latency-svc-z5wsk
Aug 14 14:11:54.976: INFO: Got endpoints: latency-svc-z5wsk [1.561589634s]
Aug 14 14:11:55.073: INFO: Created: latency-svc-vlttj
Aug 14 14:11:55.087: INFO: Got endpoints: latency-svc-vlttj [1.552407321s]
Aug 14 14:11:55.141: INFO: Created: latency-svc-ff45f
Aug 14 14:11:55.205: INFO: Got endpoints: latency-svc-ff45f [1.575153361s]
Aug 14 14:11:55.262: INFO: Created: latency-svc-9n5zh
Aug 14 14:11:55.351: INFO: Got endpoints: latency-svc-9n5zh [1.584005948s]
Aug 14 14:11:55.383: INFO: Created: latency-svc-h7k9v
Aug 14 14:11:55.539: INFO: Got endpoints: latency-svc-h7k9v [1.644046781s]
Aug 14 14:11:55.543: INFO: Created: latency-svc-4wffx
Aug 14 14:11:55.575: INFO: Got endpoints: latency-svc-4wffx [1.607710406s]
Aug 14 14:11:55.629: INFO: Created: latency-svc-fph86
Aug 14 14:11:55.675: INFO: Got endpoints: latency-svc-fph86 [1.582840251s]
Aug 14
14:11:55.725: INFO: Created: latency-svc-vtrk7
Aug 14 14:11:55.894: INFO: Created: latency-svc-kdjxl
Aug 14 14:11:55.895: INFO: Got endpoints: latency-svc-vtrk7 [1.702064695s]
Aug 14 14:11:55.951: INFO: Got endpoints: latency-svc-kdjxl [275.377354ms]
Aug 14 14:11:56.074: INFO: Created: latency-svc-pr6pv
Aug 14 14:11:56.131: INFO: Got endpoints: latency-svc-pr6pv [1.874725431s]
Aug 14 14:11:56.270: INFO: Created: latency-svc-mxtsj
Aug 14 14:11:56.282: INFO: Got endpoints: latency-svc-mxtsj [1.903000616s]
Aug 14 14:11:56.355: INFO: Created: latency-svc-48hkn
Aug 14 14:11:56.500: INFO: Got endpoints: latency-svc-48hkn [1.900877508s]
Aug 14 14:11:56.579: INFO: Created: latency-svc-fdwjj
Aug 14 14:11:56.624: INFO: Got endpoints: latency-svc-fdwjj [1.873657301s]
Aug 14 14:11:56.652: INFO: Created: latency-svc-mvf75
Aug 14 14:11:56.666: INFO: Got endpoints: latency-svc-mvf75 [1.883111299s]
Aug 14 14:11:56.691: INFO: Created: latency-svc-qwjdc
Aug 14 14:11:56.709: INFO: Got endpoints: latency-svc-qwjdc [1.889373907s]
Aug 14 14:11:56.762: INFO: Created: latency-svc-vbg8d
Aug 14 14:11:56.787: INFO: Got endpoints: latency-svc-vbg8d [1.847521312s]
Aug 14 14:11:56.837: INFO: Created: latency-svc-qjz94
Aug 14 14:11:56.855: INFO: Got endpoints: latency-svc-qjz94 [1.878988941s]
Aug 14 14:11:56.923: INFO: Created: latency-svc-9bsdk
Aug 14 14:11:57.005: INFO: Got endpoints: latency-svc-9bsdk [1.91757886s]
Aug 14 14:11:57.091: INFO: Created: latency-svc-m5frw
Aug 14 14:11:57.118: INFO: Created: latency-svc-rzm5w
Aug 14 14:11:57.118: INFO: Got endpoints: latency-svc-m5frw [1.912407259s]
Aug 14 14:11:57.148: INFO: Got endpoints: latency-svc-rzm5w [1.797001614s]
Aug 14 14:11:57.193: INFO: Created: latency-svc-gwcg2
Aug 14 14:11:57.259: INFO: Got endpoints: latency-svc-gwcg2 [1.719437378s]
Aug 14 14:11:57.281: INFO: Created: latency-svc-wdzxs
Aug 14 14:11:57.290: INFO: Got endpoints: latency-svc-wdzxs [1.715028087s]
Aug 14 14:11:57.309: INFO: Created: latency-svc-ljfhj
Aug 14 14:11:57.321: INFO: Got endpoints: latency-svc-ljfhj [1.425970014s]
Aug 14 14:11:57.342: INFO: Created: latency-svc-q8662
Aug 14 14:11:57.357: INFO: Got endpoints: latency-svc-q8662 [1.406478293s]
Aug 14 14:11:57.426: INFO: Created: latency-svc-q49bb
Aug 14 14:11:57.434: INFO: Got endpoints: latency-svc-q49bb [1.302459584s]
Aug 14 14:11:57.461: INFO: Created: latency-svc-sbvlc
Aug 14 14:11:57.495: INFO: Got endpoints: latency-svc-sbvlc [1.212969022s]
Aug 14 14:11:57.531: INFO: Created: latency-svc-vwzdz
Aug 14 14:11:57.607: INFO: Got endpoints: latency-svc-vwzdz [1.107292033s]
Aug 14 14:11:57.608: INFO: Created: latency-svc-rphnf
Aug 14 14:11:57.616: INFO: Got endpoints: latency-svc-rphnf [991.774996ms]
Aug 14 14:11:57.646: INFO: Created: latency-svc-q54lv
Aug 14 14:11:57.659: INFO: Got endpoints: latency-svc-q54lv [992.814362ms]
Aug 14 14:11:57.681: INFO: Created: latency-svc-mjqts
Aug 14 14:11:57.697: INFO: Got endpoints: latency-svc-mjqts [987.436611ms]
Aug 14 14:11:57.768: INFO: Created: latency-svc-rqn6n
Aug 14 14:11:57.802: INFO: Created: latency-svc-cs84h
Aug 14 14:11:57.803: INFO: Got endpoints: latency-svc-rqn6n [1.015181459s]
Aug 14 14:11:57.851: INFO: Got endpoints: latency-svc-cs84h [995.585302ms]
Aug 14 14:11:57.929: INFO: Created: latency-svc-7kjbq
Aug 14 14:11:57.953: INFO: Got endpoints: latency-svc-7kjbq [948.522608ms]
Aug 14 14:11:57.982: INFO: Created: latency-svc-cvk96
Aug 14 14:11:58.006: INFO: Got endpoints: latency-svc-cvk96 [888.240749ms]
Aug 14 14:11:58.097: INFO: Created: latency-svc-wxgp2
Aug 14 14:11:58.099: INFO: Got endpoints: latency-svc-wxgp2 [950.93553ms]
Aug 14 14:11:58.132: INFO: Created: latency-svc-5mgv5
Aug 14 14:11:58.146: INFO: Got endpoints: latency-svc-5mgv5 [886.357147ms]
Aug 14 14:11:58.170: INFO: Created: latency-svc-7cgwt
Aug 14 14:11:58.182: INFO: Got endpoints: latency-svc-7cgwt [891.514197ms]
Aug 14 14:11:58.250: INFO: Created: latency-svc-qmrl9
Aug 14 14:11:58.252: INFO: Got endpoints: latency-svc-qmrl9 [930.833353ms]
Aug 14 14:11:58.305: INFO: Created: latency-svc-wd5w4
Aug 14 14:11:58.322: INFO: Got endpoints: latency-svc-wd5w4 [964.405988ms]
Aug 14 14:11:58.431: INFO: Created: latency-svc-b47fj
Aug 14 14:11:58.468: INFO: Got endpoints: latency-svc-b47fj [1.033496952s]
Aug 14 14:11:58.503: INFO: Created: latency-svc-62hrg
Aug 14 14:11:58.528: INFO: Got endpoints: latency-svc-62hrg [1.0319368s]
Aug 14 14:11:58.588: INFO: Created: latency-svc-zk2vx
Aug 14 14:11:58.603: INFO: Got endpoints: latency-svc-zk2vx [995.360263ms]
Aug 14 14:11:58.636: INFO: Created: latency-svc-lcsq4
Aug 14 14:11:58.652: INFO: Got endpoints: latency-svc-lcsq4 [1.03626303s]
Aug 14 14:11:58.671: INFO: Created: latency-svc-xk726
Aug 14 14:11:58.737: INFO: Created: latency-svc-lfmtd
Aug 14 14:11:58.738: INFO: Got endpoints: latency-svc-xk726 [1.078305531s]
Aug 14 14:11:58.750: INFO: Got endpoints: latency-svc-lfmtd [1.052867323s]
Aug 14 14:11:58.793: INFO: Created: latency-svc-689mp
Aug 14 14:11:58.822: INFO: Got endpoints: latency-svc-689mp [1.018795669s]
Aug 14 14:11:58.875: INFO: Created: latency-svc-gb49q
Aug 14 14:11:58.888: INFO: Got endpoints: latency-svc-gb49q [1.037456441s]
Aug 14 14:11:58.936: INFO: Created: latency-svc-b4nbg
Aug 14 14:11:58.965: INFO: Got endpoints: latency-svc-b4nbg [1.011780907s]
Aug 14 14:11:59.006: INFO: Created: latency-svc-tgsjl
Aug 14 14:11:59.027: INFO: Got endpoints: latency-svc-tgsjl [1.020553204s]
Aug 14 14:11:59.068: INFO: Created: latency-svc-h2w7d
Aug 14 14:11:59.091: INFO: Got endpoints: latency-svc-h2w7d [991.819879ms]
Aug 14 14:11:59.157: INFO: Created: latency-svc-9fqwz
Aug 14 14:11:59.206: INFO: Created: latency-svc-pjfpn
Aug 14 14:11:59.207: INFO: Got endpoints: latency-svc-9fqwz [1.060753096s]
Aug 14 14:11:59.249: INFO: Got endpoints: latency-svc-pjfpn [1.066872078s]
Aug 14 14:11:59.331: INFO: Created: latency-svc-8kvsf
Aug 14 14:11:59.344: INFO: Got endpoints: latency-svc-8kvsf [1.092370888s]
Aug 14 14:11:59.381: INFO: Created: latency-svc-d5trg
Aug 14 14:11:59.399: INFO: Got endpoints: latency-svc-d5trg [1.07730729s]
Aug 14 14:11:59.417: INFO: Created: latency-svc-p65wk
Aug 14 14:11:59.588: INFO: Got endpoints: latency-svc-p65wk [1.119860464s]
Aug 14 14:11:59.768: INFO: Created: latency-svc-f4cct
Aug 14 14:11:59.795: INFO: Created: latency-svc-f6xp4
Aug 14 14:11:59.795: INFO: Got endpoints: latency-svc-f4cct [1.267356934s]
Aug 14 14:11:59.830: INFO: Got endpoints: latency-svc-f6xp4 [1.226931467s]
Aug 14 14:11:59.954: INFO: Created: latency-svc-qsgcq
Aug 14 14:11:59.999: INFO: Got endpoints: latency-svc-qsgcq [1.346812458s]
Aug 14 14:12:00.000: INFO: Created: latency-svc-gwfdz
Aug 14 14:12:00.046: INFO: Got endpoints: latency-svc-gwfdz [1.308494009s]
Aug 14 14:12:00.133: INFO: Created: latency-svc-n7fgg
Aug 14 14:12:00.143: INFO: Got endpoints: latency-svc-n7fgg [1.392855729s]
Aug 14 14:12:00.204: INFO: Created: latency-svc-65rt7
Aug 14 14:12:00.291: INFO: Got endpoints: latency-svc-65rt7 [1.469167984s]
Aug 14 14:12:00.291: INFO: Created: latency-svc-ss9fr
Aug 14 14:12:00.311: INFO: Got endpoints: latency-svc-ss9fr [1.422920929s]
Aug 14 14:12:00.358: INFO: Created: latency-svc-c7mm2
Aug 14 14:12:00.438: INFO: Got endpoints: latency-svc-c7mm2 [1.472108681s]
Aug 14 14:12:00.487: INFO: Created: latency-svc-mszh9
Aug 14 14:12:00.498: INFO: Got endpoints: latency-svc-mszh9 [1.471280473s]
Aug 14 14:12:00.605: INFO: Created: latency-svc-rckjh
Aug 14 14:12:00.609: INFO: Got endpoints: latency-svc-rckjh [1.5174295s]
Aug 14 14:12:00.654: INFO: Created: latency-svc-z7vtr
Aug 14 14:12:00.677: INFO: Got endpoints: latency-svc-z7vtr [1.470709944s]
Aug 14 14:12:00.779: INFO: Created: latency-svc-b8jqk
Aug 14 14:12:00.809: INFO: Got endpoints: latency-svc-b8jqk [1.559526941s]
Aug 14 14:12:00.812: INFO: Created: latency-svc-hdx7h
Aug 14 14:12:00.833: INFO: Got endpoints: latency-svc-hdx7h [1.488081145s]
Aug 14 14:12:00.863: INFO: Created: latency-svc-kjbrz
Aug 14 14:12:00.877: INFO: Got endpoints:
latency-svc-kjbrz [1.477406532s]
Aug 14 14:12:00.953: INFO: Created: latency-svc-6cwhj
Aug 14 14:12:00.959: INFO: Got endpoints: latency-svc-6cwhj [1.371290512s]
Aug 14 14:12:01.008: INFO: Created: latency-svc-kz9q9
Aug 14 14:12:01.028: INFO: Got endpoints: latency-svc-kz9q9 [1.232755163s]
Aug 14 14:12:01.108: INFO: Created: latency-svc-7lv7m
Aug 14 14:12:01.118: INFO: Got endpoints: latency-svc-7lv7m [1.28743259s]
Aug 14 14:12:01.157: INFO: Created: latency-svc-jdn8v
Aug 14 14:12:01.166: INFO: Got endpoints: latency-svc-jdn8v [1.166600807s]
Aug 14 14:12:01.193: INFO: Created: latency-svc-vskhx
Aug 14 14:12:01.247: INFO: Got endpoints: latency-svc-vskhx [1.200129593s]
Aug 14 14:12:01.469: INFO: Created: latency-svc-rs7s6
Aug 14 14:12:01.474: INFO: Got endpoints: latency-svc-rs7s6 [1.331343357s]
Aug 14 14:12:01.553: INFO: Created: latency-svc-xjzs9
Aug 14 14:12:01.630: INFO: Got endpoints: latency-svc-xjzs9 [1.338560156s]
Aug 14 14:12:01.668: INFO: Created: latency-svc-2dz82
Aug 14 14:12:01.684: INFO: Got endpoints: latency-svc-2dz82 [1.371934052s]
Aug 14 14:12:01.715: INFO: Created: latency-svc-6jfqt
Aug 14 14:12:01.870: INFO: Got endpoints: latency-svc-6jfqt [1.432146164s]
Aug 14 14:12:01.872: INFO: Created: latency-svc-zl6vm
Aug 14 14:12:01.965: INFO: Got endpoints: latency-svc-zl6vm [1.465952344s]
Aug 14 14:12:02.072: INFO: Created: latency-svc-z4pzj
Aug 14 14:12:02.193: INFO: Got endpoints: latency-svc-z4pzj [1.583701491s]
Aug 14 14:12:02.194: INFO: Created: latency-svc-7vp2b
Aug 14 14:12:02.282: INFO: Got endpoints: latency-svc-7vp2b [1.604635061s]
Aug 14 14:12:03.219: INFO: Created: latency-svc-scnrk
Aug 14 14:12:03.273: INFO: Got endpoints: latency-svc-scnrk [2.463546033s]
Aug 14 14:12:03.506: INFO: Created: latency-svc-2q6rt
Aug 14 14:12:03.571: INFO: Got endpoints: latency-svc-2q6rt [2.737822056s]
Aug 14 14:12:04.718: INFO: Created: latency-svc-4pddb
Aug 14 14:12:05.059: INFO: Got endpoints: latency-svc-4pddb [4.182280537s]
Aug 14 14:12:05.066: INFO: Created: latency-svc-8l2cq
Aug 14 14:12:05.090: INFO: Got endpoints: latency-svc-8l2cq [4.130701822s]
Aug 14 14:12:05.126: INFO: Created: latency-svc-4vjdj
Aug 14 14:12:05.198: INFO: Got endpoints: latency-svc-4vjdj [4.169646513s]
Aug 14 14:12:05.209: INFO: Created: latency-svc-vkhfd
Aug 14 14:12:05.250: INFO: Got endpoints: latency-svc-vkhfd [4.132562201s]
Aug 14 14:12:05.281: INFO: Created: latency-svc-wjmns
Aug 14 14:12:05.403: INFO: Got endpoints: latency-svc-wjmns [4.236274684s]
Aug 14 14:12:05.477: INFO: Created: latency-svc-68kd5
Aug 14 14:12:06.282: INFO: Got endpoints: latency-svc-68kd5 [5.03479047s]
Aug 14 14:12:06.286: INFO: Created: latency-svc-8q2sz
Aug 14 14:12:06.578: INFO: Got endpoints: latency-svc-8q2sz [5.103238223s]
Aug 14 14:12:06.650: INFO: Created: latency-svc-mm9mk
Aug 14 14:12:06.737: INFO: Got endpoints: latency-svc-mm9mk [5.10722623s]
Aug 14 14:12:06.831: INFO: Created: latency-svc-q9pd7
Aug 14 14:12:06.940: INFO: Got endpoints: latency-svc-q9pd7 [5.256179556s]
Aug 14 14:12:06.987: INFO: Created: latency-svc-sz8x2
Aug 14 14:12:07.007: INFO: Got endpoints: latency-svc-sz8x2 [5.137237426s]
Aug 14 14:12:07.365: INFO: Created: latency-svc-9z46f
Aug 14 14:12:07.505: INFO: Got endpoints: latency-svc-9z46f [5.539929046s]
Aug 14 14:12:07.529: INFO: Created: latency-svc-ztkp6
Aug 14 14:12:07.601: INFO: Got endpoints: latency-svc-ztkp6 [5.408303845s]
Aug 14 14:12:07.726: INFO: Created: latency-svc-d6s7w
Aug 14 14:12:07.757: INFO: Got endpoints: latency-svc-d6s7w [5.474624441s]
Aug 14 14:12:07.818: INFO: Created: latency-svc-tr6b4
Aug 14 14:12:07.960: INFO: Got endpoints: latency-svc-tr6b4 [4.686711199s]
Aug 14 14:12:08.187: INFO: Created: latency-svc-tf4nh
Aug 14 14:12:08.214: INFO: Got endpoints: latency-svc-tf4nh [4.64306394s]
Aug 14 14:12:08.219: INFO: Created: latency-svc-xccfg
Aug 14 14:12:08.255: INFO: Got endpoints: latency-svc-xccfg [3.195555543s]
Aug 14 14:12:08.360: INFO: Created: latency-svc-xhns8
Aug 14 14:12:08.406: INFO: Created: latency-svc-pzdg6
Aug 14 14:12:08.407: INFO: Got endpoints: latency-svc-xhns8 [3.316100627s]
Aug 14 14:12:08.421: INFO: Got endpoints: latency-svc-pzdg6 [3.223205351s]
Aug 14 14:12:08.517: INFO: Created: latency-svc-sh22g
Aug 14 14:12:08.531: INFO: Got endpoints: latency-svc-sh22g [3.280397404s]
Aug 14 14:12:08.586: INFO: Created: latency-svc-kb2kb
Aug 14 14:12:08.729: INFO: Got endpoints: latency-svc-kb2kb [3.325599439s]
Aug 14 14:12:09.311: INFO: Created: latency-svc-2tft2
Aug 14 14:12:09.825: INFO: Got endpoints: latency-svc-2tft2 [3.542179032s]
Aug 14 14:12:10.554: INFO: Created: latency-svc-lb7ml
Aug 14 14:12:10.598: INFO: Got endpoints: latency-svc-lb7ml [4.019882581s]
Aug 14 14:12:10.912: INFO: Created: latency-svc-flgxz
Aug 14 14:12:11.011: INFO: Created: latency-svc-gp865
Aug 14 14:12:11.012: INFO: Got endpoints: latency-svc-flgxz [4.274257889s]
Aug 14 14:12:11.016: INFO: Got endpoints: latency-svc-gp865 [4.075794072s]
Aug 14 14:12:11.185: INFO: Created: latency-svc-vbp4n
Aug 14 14:12:11.190: INFO: Got endpoints: latency-svc-vbp4n [4.182070693s]
Aug 14 14:12:11.262: INFO: Created: latency-svc-bjrzt
Aug 14 14:12:11.330: INFO: Created: latency-svc-qxfp6
Aug 14 14:12:11.333: INFO: Got endpoints: latency-svc-bjrzt [3.827834274s]
Aug 14 14:12:11.353: INFO: Got endpoints: latency-svc-qxfp6 [3.751380603s]
Aug 14 14:12:11.390: INFO: Created: latency-svc-rqc8z
Aug 14 14:12:11.408: INFO: Got endpoints: latency-svc-rqc8z [3.650192816s]
Aug 14 14:12:11.425: INFO: Created: latency-svc-6h9km
Aug 14 14:12:11.497: INFO: Got endpoints: latency-svc-6h9km [3.537518649s]
Aug 14 14:12:11.501: INFO: Created: latency-svc-fcdfx
Aug 14 14:12:11.521: INFO: Got endpoints: latency-svc-fcdfx [3.307182578s]
Aug 14 14:12:11.547: INFO: Created: latency-svc-pqhc6
Aug 14 14:12:11.563: INFO: Got endpoints: latency-svc-pqhc6 [3.307978379s]
Aug 14 14:12:11.642: INFO: Created: latency-svc-pqsct
Aug 14 14:12:11.654: INFO: Got endpoints: latency-svc-pqsct [3.247450398s]
Aug 14 14:12:11.677: INFO: Created: latency-svc-kdtwk
Aug 14 14:12:11.713: INFO: Got endpoints: latency-svc-kdtwk [3.291475876s]
Aug 14 14:12:11.780: INFO: Created: latency-svc-vbhg8
Aug 14 14:12:11.804: INFO: Got endpoints: latency-svc-vbhg8 [3.273226871s]
Aug 14 14:12:11.833: INFO: Created: latency-svc-9kqjr
Aug 14 14:12:11.846: INFO: Got endpoints: latency-svc-9kqjr [3.117314738s]
Aug 14 14:12:11.960: INFO: Created: latency-svc-ph9m6
Aug 14 14:12:12.010: INFO: Got endpoints: latency-svc-ph9m6 [2.184870487s]
Aug 14 14:12:12.050: INFO: Created: latency-svc-qfz2z
Aug 14 14:12:12.086: INFO: Got endpoints: latency-svc-qfz2z [1.488182382s]
Aug 14 14:12:12.097: INFO: Created: latency-svc-wfqz4
Aug 14 14:12:12.112: INFO: Got endpoints: latency-svc-wfqz4 [1.099736563s]
Aug 14 14:12:12.134: INFO: Created: latency-svc-9xjph
Aug 14 14:12:12.147: INFO: Got endpoints: latency-svc-9xjph [1.13132661s]
Aug 14 14:12:12.166: INFO: Created: latency-svc-wgnpd
Aug 14 14:12:12.187: INFO: Got endpoints: latency-svc-wgnpd [997.188191ms]
Aug 14 14:12:12.247: INFO: Created: latency-svc-fmcpr
Aug 14 14:12:12.278: INFO: Got endpoints: latency-svc-fmcpr [945.190759ms]
Aug 14 14:12:12.282: INFO: Created: latency-svc-v2rxd
Aug 14 14:12:12.414: INFO: Got endpoints: latency-svc-v2rxd [1.061292641s]
Aug 14 14:12:12.418: INFO: Created: latency-svc-n9sws
Aug 14 14:12:12.461: INFO: Got endpoints: latency-svc-n9sws [1.052985949s]
Aug 14 14:12:13.268: INFO: Created: latency-svc-j5dv9
Aug 14 14:12:13.318: INFO: Got endpoints: latency-svc-j5dv9 [1.820153855s]
Aug 14 14:12:13.647: INFO: Created: latency-svc-hvnxk
Aug 14 14:12:13.828: INFO: Got endpoints: latency-svc-hvnxk [2.306167763s]
Aug 14 14:12:13.911: INFO: Created: latency-svc-86hr9
Aug 14 14:12:14.037: INFO: Got endpoints: latency-svc-86hr9 [2.473339921s]
Aug 14 14:12:14.093: INFO: Created: latency-svc-x6fdw
Aug 14 14:12:14.125: INFO: Got endpoints: latency-svc-x6fdw [2.4707696s]
Aug 14 14:12:14.210: INFO: Created: latency-svc-422jf
Aug 14 14:12:14.310: INFO: Got
endpoints: latency-svc-422jf [2.59683348s]
Aug 14 14:12:14.493: INFO: Created: latency-svc-6v9s2
Aug 14 14:12:14.505: INFO: Got endpoints: latency-svc-6v9s2 [2.700274057s]
Aug 14 14:12:14.591: INFO: Created: latency-svc-n7qcx
Aug 14 14:12:14.719: INFO: Got endpoints: latency-svc-n7qcx [2.873022401s]
Aug 14 14:12:14.882: INFO: Created: latency-svc-gd846
Aug 14 14:12:14.892: INFO: Got endpoints: latency-svc-gd846 [2.882127619s]
Aug 14 14:12:15.559: INFO: Created: latency-svc-q6r5d
Aug 14 14:12:15.948: INFO: Got endpoints: latency-svc-q6r5d [3.861236636s]
Aug 14 14:12:15.954: INFO: Created: latency-svc-6ss2s
Aug 14 14:12:16.274: INFO: Got endpoints: latency-svc-6ss2s [4.161934493s]
Aug 14 14:12:16.546: INFO: Created: latency-svc-v992z
Aug 14 14:12:16.585: INFO: Got endpoints: latency-svc-v992z [4.437735458s]
Aug 14 14:12:16.935: INFO: Created: latency-svc-wt84t
Aug 14 14:12:17.151: INFO: Got endpoints: latency-svc-wt84t [4.963536079s]
Aug 14 14:12:17.180: INFO: Created: latency-svc-tfwhs
Aug 14 14:12:17.245: INFO: Got endpoints: latency-svc-tfwhs [4.967098536s]
Aug 14 14:12:17.404: INFO: Created: latency-svc-rbvr8
Aug 14 14:12:17.469: INFO: Got endpoints: latency-svc-rbvr8 [5.054254268s]
Aug 14 14:12:17.593: INFO: Created: latency-svc-mdwp8
Aug 14 14:12:17.636: INFO: Got endpoints: latency-svc-mdwp8 [5.175511746s]
Aug 14 14:12:17.637: INFO: Created: latency-svc-8bnn7
Aug 14 14:12:17.767: INFO: Got endpoints: latency-svc-8bnn7 [4.449163318s]
Aug 14 14:12:17.815: INFO: Created: latency-svc-drjpk
Aug 14 14:12:17.854: INFO: Got endpoints: latency-svc-drjpk [4.025756135s]
Aug 14 14:12:17.923: INFO: Created: latency-svc-84szc
Aug 14 14:12:17.955: INFO: Got endpoints: latency-svc-84szc [3.918104241s]
Aug 14 14:12:18.009: INFO: Created: latency-svc-qkxzg
Aug 14 14:12:18.050: INFO: Got endpoints: latency-svc-qkxzg [3.92492254s]
Aug 14 14:12:18.052: INFO: Latencies: [170.461673ms 275.377354ms 292.456018ms 431.023239ms 489.57435ms 886.357147ms 888.240749ms 891.514197ms 922.142632ms 930.833353ms 945.190759ms 948.522608ms 950.93553ms 963.721258ms 964.405988ms 987.436611ms 991.774996ms 991.819879ms 992.814362ms 995.360263ms 995.585302ms 997.188191ms 1.011780907s 1.015181459s 1.018795669s 1.020553204s 1.0319368s 1.033496952s 1.03626303s 1.037456441s 1.052867323s 1.052985949s 1.060753096s 1.061292641s 1.066872078s 1.07730729s 1.078305531s 1.092370888s 1.099736563s 1.107292033s 1.118106218s 1.119860464s 1.13132661s 1.150842411s 1.166600807s 1.171418524s 1.188888966s 1.19687067s 1.200129593s 1.208175827s 1.212969022s 1.224974775s 1.226931467s 1.231265883s 1.232755163s 1.245208244s 1.255205467s 1.267356934s 1.28743259s 1.302459584s 1.308494009s 1.322494582s 1.331343357s 1.338560156s 1.346812458s 1.347964816s 1.365152328s 1.371290512s 1.371934052s 1.392855729s 1.406478293s 1.422920929s 1.425970014s 1.432146164s 1.45472434s 1.465952344s 1.469167984s 1.470709944s 1.471280473s 1.472108681s 1.477406532s 1.488081145s 1.488182382s 1.5174295s 1.520852791s 1.53075431s 1.543957568s 1.552407321s 1.555966108s 1.559526941s 1.561589634s 1.562861603s 1.575153361s 1.582840251s 1.583701491s 1.584005948s 1.58450689s 1.598454544s 1.604635061s 1.607710406s 1.608892934s 1.618357297s 1.625308296s 1.644046781s 1.660165119s 1.675941703s 1.681903447s 1.68644658s 1.702064695s 1.715028087s 1.719437378s 1.732008433s 1.742744421s 1.770501891s 1.790652422s 1.797001614s 1.799331191s 1.808049727s 1.820153855s 1.847521312s 1.873657301s 1.874725431s 1.878988941s 1.883111299s 1.88641241s 1.889373907s 1.900877508s 1.903000616s 1.904576127s 1.912407259s 1.91757886s 2.082058069s 2.112737403s 2.127341571s 2.184870487s 2.291567016s 2.306167763s 2.447866013s 2.463546033s 2.4707696s 2.472999589s 2.473339921s 2.480321569s 2.48562636s 2.531038405s 2.542889602s 2.59683348s 2.597470211s 2.619725643s 2.668276382s 2.700274057s 2.737822056s 2.873022401s 2.882127619s 3.117314738s 3.195555543s 3.223205351s 3.247450398s 3.273226871s 3.280397404s 3.291475876s 3.307182578s 3.307978379s 3.316100627s 3.325599439s 3.537518649s 3.542179032s 3.650192816s 3.751380603s 3.827834274s 3.861236636s 3.918104241s 3.92492254s 4.019882581s 4.025756135s 4.075794072s 4.130701822s 4.132562201s 4.161934493s 4.169646513s 4.182070693s 4.182280537s 4.236274684s 4.274257889s 4.437735458s 4.449163318s 4.64306394s 4.686711199s 4.963536079s 4.967098536s 5.03479047s 5.054254268s 5.103238223s 5.10722623s 5.137237426s 5.175511746s 5.256179556s 5.408303845s 5.474624441s 5.539929046s]
Aug 14 14:12:18.053: INFO: 50 %ile: 1.608892934s
Aug 14 14:12:18.054: INFO: 90 %ile: 4.182070693s
Aug 14 14:12:18.054: INFO: 99 %ile: 5.474624441s
Aug 14 14:12:18.054: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:12:18.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-6128" for this suite.
• [SLOW TEST:37.512 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:12:18.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace
api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-22a15168-4976-4583-8e75-0106f2fd7c89 STEP: Creating a pod to test consume secrets Aug 14 14:12:18.323: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625" in namespace "projected-5870" to be "Succeeded or Failed" Aug 14 14:12:18.345: INFO: Pod "pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625": Phase="Pending", Reason="", readiness=false. Elapsed: 21.844128ms Aug 14 14:12:20.432: INFO: Pod "pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108805934s Aug 14 14:12:23.380: INFO: Pod "pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625": Phase="Pending", Reason="", readiness=false. Elapsed: 5.056653074s Aug 14 14:12:26.319: INFO: Pod "pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625": Phase="Pending", Reason="", readiness=false. Elapsed: 7.995279911s Aug 14 14:12:29.123: INFO: Pod "pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625": Phase="Running", Reason="", readiness=true. Elapsed: 10.799845006s Aug 14 14:12:33.396: INFO: Pod "pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.072562466s STEP: Saw pod success Aug 14 14:12:33.397: INFO: Pod "pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625" satisfied condition "Succeeded or Failed" Aug 14 14:12:34.505: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625 container projected-secret-volume-test: STEP: delete the pod Aug 14 14:12:41.046: INFO: Waiting for pod pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625 to disappear Aug 14 14:12:41.355: INFO: Pod pod-projected-secrets-50222342-6409-40a5-90a0-4ddd84db1625 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:12:41.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5870" for this suite. • [SLOW TEST:25.354 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":14,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client 
Aug 14 14:12:43.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Aug 14 14:12:47.476: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 14 14:12:49.576: INFO: Waiting for terminating namespaces to be deleted... Aug 14 14:12:49.832: INFO: Logging pods the kubelet thinks is on node kali-worker before test Aug 14 14:12:50.283: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.284: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 14 14:12:50.284: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.284: INFO: Container kindnet-cni ready: true, restart count 1 Aug 14 14:12:50.284: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.284: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 14 14:12:50.284: INFO: rally-1cd7c17f-v3nmd8vz-589d46bdb9-zmgmx from c-rally-1cd7c17f-mfngbtu6 started at 2020-08-13 20:24:45 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.285: INFO: Container rally-1cd7c17f-v3nmd8vz ready: true, restart count 0 Aug 14 14:12:50.285: INFO: rally-466602a1-db17uwyh-z26cp from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:59:13 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.285: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0 Aug 14 14:12:50.285: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.285: INFO: 
Container kube-proxy ready: true, restart count 0 Aug 14 14:12:50.285: INFO: rally-466602a1-db17uwyh-6xgdb from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:51:36 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.285: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0 Aug 14 14:12:50.285: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Aug 14 14:12:50.315: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.315: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 14 14:12:50.315: INFO: rally-43093f20-h0e1n44t from c-rally-43093f20-l7nso5gb started at 2020-08-14 14:12:23 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.315: INFO: Container rally-43093f20-h0e1n44t ready: true, restart count 0 Aug 14 14:12:50.315: INFO: rally-7104017d-j5l4uv4e-0 from c-rally-7104017d-2oejvhl7 started at 2020-08-11 18:51:39 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.315: INFO: Container rally-7104017d-j5l4uv4e ready: true, restart count 1 Aug 14 14:12:50.315: INFO: rally-1cd7c17f-v3nmd8vz-589d46bdb9-h9wtg from c-rally-1cd7c17f-mfngbtu6 started at 2020-08-13 20:24:47 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.315: INFO: Container rally-1cd7c17f-v3nmd8vz ready: true, restart count 0 Aug 14 14:12:50.315: INFO: rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh from c-rally-6c5ea4be-pyo3sp3v started at 2020-08-11 18:16:03 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.315: INFO: Container rally-6c5ea4be-96nyoha6 ready: true, restart count 72 Aug 14 14:12:50.315: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.315: INFO: Container kube-proxy ready: true, restart count 0 Aug 14 14:12:50.315: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses 
recorded) Aug 14 14:12:50.315: INFO: Container kindnet-cni ready: true, restart count 1 Aug 14 14:12:50.315: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 14 14:12:50.316: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a5ee2614-1605-49a4-b0e4-619b6519608d 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-a5ee2614-1605-49a4-b0e4-619b6519608d off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-a5ee2614-1605-49a4-b0e4-619b6519608d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:13:26.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3973" for this suite. 
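The scheduling rule this test exercises is that two pods' host ports conflict only when port, protocol, and host IP all collide (with the wildcard 0.0.0.0 overlapping every address), which is why pod1/pod2/pod3 above all schedule onto the same node. A minimal sketch of that check, illustrative only and not kube-scheduler's actual implementation:

```python
def host_ports_conflict(a, b):
    """Return True if two (port, hostIP, protocol) claims collide.

    Mirrors the rule the test exercises: a conflict needs the same port
    AND the same protocol AND overlapping host IPs, where the wildcard
    0.0.0.0 overlaps every address. Illustrative sketch, not the real
    kube-scheduler code.
    """
    port_a, ip_a, proto_a = a
    port_b, ip_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

# The three pods from the test above: no pair conflicts.
pod1 = (54321, "127.0.0.1", "TCP")
pod2 = (54321, "127.0.0.2", "TCP")   # same port, different hostIP
pod3 = (54321, "127.0.0.2", "UDP")   # same port and hostIP, different protocol
```

Under this rule, only a claim matching all three fields (or one using 0.0.0.0) would block scheduling.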
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:43.497 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":3,"skipped":31,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:13:26.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Aug 14 14:13:40.089: INFO: Successfully updated pod "annotationupdate2edab754-38cd-487f-b287-5ac9dae91aa3" [AfterEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:13:42.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2955" for this suite. • [SLOW TEST:16.292 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":47,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:13:43.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:13:48.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3485" for this suite. 
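The "patching the Namespace" step above applies a merge patch that adds a label to the object's metadata. A rough model of RFC 7386 JSON merge-patch semantics for that kind of label update (an illustrative sketch; object names are made up):

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON merge patch: dicts merge recursively,
    None deletes a key, and any non-dict value replaces wholesale.
    Sketch of the semantics behind `kubectl patch --type=merge`."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result

# Adding a label leaves the rest of metadata untouched.
ns = {"metadata": {"name": "nspatchtest", "labels": {"existing": "true"}}}
patched = json_merge_patch(ns, {"metadata": {"labels": {"testLabel": "testValue"}}})
```

Because dicts merge rather than replace, the patch only needs to carry the label being added; the test then fetches the Namespace and checks the label is present.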
STEP: Destroying namespace "nspatchtest-4b19582d-0f1d-4379-aa2d-44034d3e0f45-954" for this suite. • [SLOW TEST:6.184 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":5,"skipped":64,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:13:49.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9845 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-9845 I0814 14:13:54.301898 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9845, replica count: 2 I0814 14:13:57.353756 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 
running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 14:14:00.354293 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 14:14:03.354969 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 14:14:06.355716 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 14:14:09.356321 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 14 14:14:09.356: INFO: Creating new exec pod Aug 14 14:14:20.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9845 execpodqcpww -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 14 14:14:33.063: INFO: stderr: "I0814 14:14:32.871511 37 log.go:172] (0x4000bea160) (0x40009ea140) Create stream\nI0814 14:14:32.874582 37 log.go:172] (0x4000bea160) (0x40009ea140) Stream added, broadcasting: 1\nI0814 14:14:32.885477 37 log.go:172] (0x4000bea160) Reply frame received for 1\nI0814 14:14:32.886224 37 log.go:172] (0x4000bea160) (0x40009ea1e0) Create stream\nI0814 14:14:32.886299 37 log.go:172] (0x4000bea160) (0x40009ea1e0) Stream added, broadcasting: 3\nI0814 14:14:32.887768 37 log.go:172] (0x4000bea160) Reply frame received for 3\nI0814 14:14:32.888102 37 log.go:172] (0x4000bea160) (0x4000a5c0a0) Create stream\nI0814 14:14:32.888167 37 log.go:172] (0x4000bea160) (0x4000a5c0a0) Stream added, broadcasting: 5\nI0814 14:14:32.889610 37 log.go:172] (0x4000bea160) Reply frame received for 5\nI0814 14:14:33.002177 37 log.go:172] (0x4000bea160) Data frame received for 5\nI0814 
14:14:33.002703 37 log.go:172] (0x4000a5c0a0) (5) Data frame handling\nI0814 14:14:33.003968 37 log.go:172] (0x4000a5c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0814 14:14:33.043508 37 log.go:172] (0x4000bea160) Data frame received for 5\nI0814 14:14:33.043691 37 log.go:172] (0x4000a5c0a0) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0814 14:14:33.043946 37 log.go:172] (0x4000bea160) Data frame received for 3\nI0814 14:14:33.044092 37 log.go:172] (0x40009ea1e0) (3) Data frame handling\nI0814 14:14:33.044185 37 log.go:172] (0x4000a5c0a0) (5) Data frame sent\nI0814 14:14:33.044305 37 log.go:172] (0x4000bea160) Data frame received for 5\nI0814 14:14:33.044401 37 log.go:172] (0x4000a5c0a0) (5) Data frame handling\nI0814 14:14:33.045387 37 log.go:172] (0x4000bea160) Data frame received for 1\nI0814 14:14:33.045464 37 log.go:172] (0x40009ea140) (1) Data frame handling\nI0814 14:14:33.045532 37 log.go:172] (0x40009ea140) (1) Data frame sent\nI0814 14:14:33.046229 37 log.go:172] (0x4000bea160) (0x40009ea140) Stream removed, broadcasting: 1\nI0814 14:14:33.048043 37 log.go:172] (0x4000bea160) Go away received\nI0814 14:14:33.050802 37 log.go:172] (0x4000bea160) (0x40009ea140) Stream removed, broadcasting: 1\nI0814 14:14:33.051052 37 log.go:172] (0x4000bea160) (0x40009ea1e0) Stream removed, broadcasting: 3\nI0814 14:14:33.051266 37 log.go:172] (0x4000bea160) (0x4000a5c0a0) Stream removed, broadcasting: 5\n" Aug 14 14:14:33.065: INFO: stdout: "" Aug 14 14:14:33.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9845 execpodqcpww -- /bin/sh -x -c nc -zv -t -w 2 10.110.87.181 80' Aug 14 14:14:34.727: INFO: stderr: "I0814 14:14:34.631587 65 log.go:172] (0x400003afd0) (0x400080b720) Create stream\nI0814 14:14:34.634737 65 log.go:172] (0x400003afd0) (0x400080b720) Stream added, broadcasting: 1\nI0814 14:14:34.645893 65 
log.go:172] (0x400003afd0) Reply frame received for 1\nI0814 14:14:34.646570 65 log.go:172] (0x400003afd0) (0x4000746000) Create stream\nI0814 14:14:34.646640 65 log.go:172] (0x400003afd0) (0x4000746000) Stream added, broadcasting: 3\nI0814 14:14:34.648138 65 log.go:172] (0x400003afd0) Reply frame received for 3\nI0814 14:14:34.648532 65 log.go:172] (0x400003afd0) (0x400080b7c0) Create stream\nI0814 14:14:34.648641 65 log.go:172] (0x400003afd0) (0x400080b7c0) Stream added, broadcasting: 5\nI0814 14:14:34.650188 65 log.go:172] (0x400003afd0) Reply frame received for 5\nI0814 14:14:34.703958 65 log.go:172] (0x400003afd0) Data frame received for 3\nI0814 14:14:34.704229 65 log.go:172] (0x400003afd0) Data frame received for 1\nI0814 14:14:34.704702 65 log.go:172] (0x400080b720) (1) Data frame handling\nI0814 14:14:34.704988 65 log.go:172] (0x4000746000) (3) Data frame handling\nI0814 14:14:34.706091 65 log.go:172] (0x400003afd0) Data frame received for 5\nI0814 14:14:34.706237 65 log.go:172] (0x400080b7c0) (5) Data frame handling\nI0814 14:14:34.707165 65 log.go:172] (0x400080b7c0) (5) Data frame sent\n+ nc -zv -t -w 2 10.110.87.181 80\nConnection to 10.110.87.181 80 port [tcp/http] succeeded!\nI0814 14:14:34.708540 65 log.go:172] (0x400003afd0) Data frame received for 5\nI0814 14:14:34.708663 65 log.go:172] (0x400080b7c0) (5) Data frame handling\nI0814 14:14:34.709223 65 log.go:172] (0x400080b720) (1) Data frame sent\nI0814 14:14:34.710179 65 log.go:172] (0x400003afd0) (0x400080b720) Stream removed, broadcasting: 1\nI0814 14:14:34.712297 65 log.go:172] (0x400003afd0) Go away received\nI0814 14:14:34.715865 65 log.go:172] (0x400003afd0) (0x400080b720) Stream removed, broadcasting: 1\nI0814 14:14:34.716107 65 log.go:172] (0x400003afd0) (0x4000746000) Stream removed, broadcasting: 3\nI0814 14:14:34.716261 65 log.go:172] (0x400003afd0) (0x400080b7c0) Stream removed, broadcasting: 5\n" Aug 14 14:14:34.727: INFO: stdout: "" Aug 14 14:14:34.728: INFO: Cleaning up the 
ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:14:35.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9845" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:45.721 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":6,"skipped":91,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:14:35.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:14:35.489: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting 
the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:14:41.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1843" for this suite. • [SLOW TEST:7.151 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:14:42.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-b32d9797-1dbd-46f1-9544-23e46e0884ac STEP: Creating a pod to test consume secrets Aug 14 14:14:44.410: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-77749a3e-d70f-44b3-ba06-bfbda8118136" in namespace "projected-9007" to be "Succeeded or Failed" Aug 
14 14:14:44.453: INFO: Pod "pod-projected-secrets-77749a3e-d70f-44b3-ba06-bfbda8118136": Phase="Pending", Reason="", readiness=false. Elapsed: 43.002163ms Aug 14 14:14:46.556: INFO: Pod "pod-projected-secrets-77749a3e-d70f-44b3-ba06-bfbda8118136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146365705s Aug 14 14:14:49.326: INFO: Pod "pod-projected-secrets-77749a3e-d70f-44b3-ba06-bfbda8118136": Phase="Pending", Reason="", readiness=false. Elapsed: 4.916064983s Aug 14 14:14:52.336: INFO: Pod "pod-projected-secrets-77749a3e-d70f-44b3-ba06-bfbda8118136": Phase="Running", Reason="", readiness=true. Elapsed: 7.925557971s Aug 14 14:14:54.396: INFO: Pod "pod-projected-secrets-77749a3e-d70f-44b3-ba06-bfbda8118136": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.985614207s STEP: Saw pod success Aug 14 14:14:54.396: INFO: Pod "pod-projected-secrets-77749a3e-d70f-44b3-ba06-bfbda8118136" satisfied condition "Succeeded or Failed" Aug 14 14:14:54.548: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-77749a3e-d70f-44b3-ba06-bfbda8118136 container projected-secret-volume-test: STEP: delete the pod Aug 14 14:14:54.796: INFO: Waiting for pod pod-projected-secrets-77749a3e-d70f-44b3-ba06-bfbda8118136 to disappear Aug 14 14:14:54.827: INFO: Pod pod-projected-secrets-77749a3e-d70f-44b3-ba06-bfbda8118136 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:14:54.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9007" for this suite. 
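Secrets consumed as volumes, as in the projected-secret tests here, are stored base64-encoded in the API object's `data` field and written into the container as decoded bytes at the path given by the volume's key-to-path mapping. A sketch of that round trip (key and path names are hypothetical):

```python
import base64

# The API server stores Secret `data` values base64-encoded.
secret_data = {"username": base64.b64encode(b"admin").decode()}

# The kubelet decodes each value and writes it at the path chosen by the
# volume's items[].key -> items[].path mapping (here "username" -> "creds/user").
mapping = {"username": "creds/user"}
files = {mapping[k]: base64.b64decode(v) for k, v in secret_data.items()}
```

The test's container simply reads the mapped file back and the framework compares it to the expected plaintext.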
• [SLOW TEST:12.671 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:14:55.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 14:14:55.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d65efff1-8df4-4d32-943a-3a18b383a47b" in namespace "projected-9006" to be "Succeeded or Failed" Aug 14 14:14:55.919: INFO: Pod "downwardapi-volume-d65efff1-8df4-4d32-943a-3a18b383a47b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 223.976702ms Aug 14 14:14:58.512: INFO: Pod "downwardapi-volume-d65efff1-8df4-4d32-943a-3a18b383a47b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.816566389s Aug 14 14:15:00.610: INFO: Pod "downwardapi-volume-d65efff1-8df4-4d32-943a-3a18b383a47b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.914259218s Aug 14 14:15:02.917: INFO: Pod "downwardapi-volume-d65efff1-8df4-4d32-943a-3a18b383a47b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.221461216s Aug 14 14:15:04.963: INFO: Pod "downwardapi-volume-d65efff1-8df4-4d32-943a-3a18b383a47b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.26771804s STEP: Saw pod success Aug 14 14:15:04.963: INFO: Pod "downwardapi-volume-d65efff1-8df4-4d32-943a-3a18b383a47b" satisfied condition "Succeeded or Failed" Aug 14 14:15:05.003: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d65efff1-8df4-4d32-943a-3a18b383a47b container client-container: STEP: delete the pod Aug 14 14:15:05.212: INFO: Waiting for pod downwardapi-volume-d65efff1-8df4-4d32-943a-3a18b383a47b to disappear Aug 14 14:15:05.229: INFO: Pod downwardapi-volume-d65efff1-8df4-4d32-943a-3a18b383a47b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:15:05.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9006" for this suite. 
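The per-item "mode" this test verifies is an ordinary POSIX permission set on the projected file (commonly something like 0400, owner read-only). A quick local sketch of setting and reading back such a mode, independent of Kubernetes:

```python
import os
import stat
import tempfile

# Create a scratch file and give it mode 0400 (owner read-only), the kind
# of per-item mode a projected/downward API volume item can request.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o400)

# Read the permission bits back, as the test container does with `stat`.
mode = stat.S_IMODE(os.stat(path).st_mode)
os.remove(path)
```

In the e2e test the same comparison happens inside the pod: the container stats the projected file and prints its mode for the framework to check.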
• [SLOW TEST:10.377 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":169,"failed":0}
SSS
------------------------------
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:15:05.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Aug 14 14:15:05.783: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Aug 14 14:15:05.827: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Aug 14 14:15:05.831: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Aug 14 14:15:05.949: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Aug 14 14:15:05.949: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Aug 14 14:15:06.094: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Aug 14 14:15:06.095: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Aug 14 14:15:14.818: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:15:15.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-9728" for this suite.
• [SLOW TEST:10.256 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":10,"skipped":172,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:15:15.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 14 14:15:20.290: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 14 14:15:24.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 14:15:26.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 14:15:28.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 14:15:30.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011320, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 14 14:15:35.304: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 14 14:15:36.032: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:15:36.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5767" for this suite.
STEP: Destroying namespace "webhook-5767-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:23.661 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":11,"skipped":182,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:15:39.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-009fe53f-277d-4eea-a9ac-4ed403f2b9d9
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:15:51.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5171" for this suite.
• [SLOW TEST:12.237 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":190,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:15:51.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 14 14:15:51.976: INFO: >>> kubeConfig: /root/.kube/config
Aug 14 14:16:12.115: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:17:23.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2777" for this suite.
• [SLOW TEST:92.354 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":13,"skipped":191,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:17:23.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-ef9f0a01-c01d-4486-9e77-cb4e0065907a
STEP: Creating a pod to test consume configMaps
Aug 14 14:17:24.295: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-413ae96c-36b6-451e-9ff0-b9feb5eb434a" in namespace "projected-6928" to be "Succeeded or Failed"
Aug 14 14:17:24.343: INFO: Pod "pod-projected-configmaps-413ae96c-36b6-451e-9ff0-b9feb5eb434a": Phase="Pending", Reason="", readiness=false. Elapsed: 47.220182ms
Aug 14 14:17:26.357: INFO: Pod "pod-projected-configmaps-413ae96c-36b6-451e-9ff0-b9feb5eb434a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061849982s
Aug 14 14:17:28.365: INFO: Pod "pod-projected-configmaps-413ae96c-36b6-451e-9ff0-b9feb5eb434a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069591587s
Aug 14 14:17:30.435: INFO: Pod "pod-projected-configmaps-413ae96c-36b6-451e-9ff0-b9feb5eb434a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139708034s
STEP: Saw pod success
Aug 14 14:17:30.435: INFO: Pod "pod-projected-configmaps-413ae96c-36b6-451e-9ff0-b9feb5eb434a" satisfied condition "Succeeded or Failed"
Aug 14 14:17:30.813: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-413ae96c-36b6-451e-9ff0-b9feb5eb434a container projected-configmap-volume-test:
STEP: delete the pod
Aug 14 14:17:31.294: INFO: Waiting for pod pod-projected-configmaps-413ae96c-36b6-451e-9ff0-b9feb5eb434a to disappear
Aug 14 14:17:31.369: INFO: Pod pod-projected-configmaps-413ae96c-36b6-451e-9ff0-b9feb5eb434a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:17:31.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6928" for this suite.
• [SLOW TEST:7.575 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":195,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:17:31.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Aug 14 14:17:31.694: INFO: Waiting up to 5m0s for pod "client-containers-8aacc4e6-a4f9-4ead-a5ad-60fbce23f115" in namespace "containers-2962" to be "Succeeded or Failed"
Aug 14 14:17:31.724: INFO: Pod "client-containers-8aacc4e6-a4f9-4ead-a5ad-60fbce23f115": Phase="Pending", Reason="", readiness=false. Elapsed: 29.259145ms
Aug 14 14:17:34.281: INFO: Pod "client-containers-8aacc4e6-a4f9-4ead-a5ad-60fbce23f115": Phase="Pending", Reason="", readiness=false. Elapsed: 2.586229461s
Aug 14 14:17:36.287: INFO: Pod "client-containers-8aacc4e6-a4f9-4ead-a5ad-60fbce23f115": Phase="Pending", Reason="", readiness=false. Elapsed: 4.59224774s
Aug 14 14:17:38.294: INFO: Pod "client-containers-8aacc4e6-a4f9-4ead-a5ad-60fbce23f115": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.599690849s
STEP: Saw pod success
Aug 14 14:17:38.294: INFO: Pod "client-containers-8aacc4e6-a4f9-4ead-a5ad-60fbce23f115" satisfied condition "Succeeded or Failed"
Aug 14 14:17:38.300: INFO: Trying to get logs from node kali-worker pod client-containers-8aacc4e6-a4f9-4ead-a5ad-60fbce23f115 container test-container:
STEP: delete the pod
Aug 14 14:17:38.670: INFO: Waiting for pod client-containers-8aacc4e6-a4f9-4ead-a5ad-60fbce23f115 to disappear
Aug 14 14:17:38.720: INFO: Pod client-containers-8aacc4e6-a4f9-4ead-a5ad-60fbce23f115 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:17:38.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2962" for this suite.
• [SLOW TEST:7.246 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":203,"failed":0}
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:17:38.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-4652
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 14 14:17:39.095: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 14 14:17:39.520: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 14:17:42.142: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 14:17:43.880: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 14:17:45.526: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:17:48.160: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:17:50.078: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:17:51.528: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:17:53.570: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:17:55.907: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:17:57.527: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:17:59.525: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:18:01.527: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 14 14:18:01.536: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 14 14:18:07.585: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.15:8080/dial?request=hostname&protocol=http&host=10.244.2.162&port=8080&tries=1'] Namespace:pod-network-test-4652 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 14:18:07.585: INFO: >>> kubeConfig: /root/.kube/config
I0814 14:18:07.658588 10 log.go:172] (0x400281c580) (0x400199dae0) Create stream
I0814 14:18:07.659008 10 log.go:172] (0x400281c580) (0x400199dae0) Stream added, broadcasting: 1
I0814 14:18:07.694187 10 log.go:172] (0x400281c580) Reply frame received for 1
I0814 14:18:07.694911 10 log.go:172] (0x400281c580) (0x4001e08280) Create stream
I0814 14:18:07.694989 10 log.go:172] (0x400281c580) (0x4001e08280) Stream added, broadcasting: 3
I0814 14:18:07.696767 10 log.go:172] (0x400281c580) Reply frame received for 3
I0814 14:18:07.697017 10 log.go:172] (0x400281c580) (0x4001a12000) Create stream
I0814 14:18:07.697079 10 log.go:172] (0x400281c580) (0x4001a12000) Stream added, broadcasting: 5
I0814 14:18:07.698204 10 log.go:172] (0x400281c580) Reply frame received for 5
I0814 14:18:07.838234 10 log.go:172] (0x400281c580) Data frame received for 5
I0814 14:18:07.838646 10 log.go:172] (0x400281c580) Data frame received for 3
I0814 14:18:07.838886 10 log.go:172] (0x4001e08280) (3) Data frame handling
I0814 14:18:07.839159 10 log.go:172] (0x4001a12000) (5) Data frame handling
I0814 14:18:07.839956 10 log.go:172] (0x4001e08280) (3) Data frame sent
I0814 14:18:07.840199 10 log.go:172] (0x400281c580) Data frame received for 1
I0814 14:18:07.840292 10 log.go:172] (0x400199dae0) (1) Data frame handling
I0814 14:18:07.840384 10 log.go:172] (0x400281c580) Data frame received for 3
I0814 14:18:07.840472 10 log.go:172] (0x4001e08280) (3) Data frame handling
I0814 14:18:07.840649 10 log.go:172] (0x400199dae0) (1) Data frame sent
I0814 14:18:07.843524 10 log.go:172] (0x400281c580) (0x400199dae0) Stream removed, broadcasting: 1
I0814 14:18:07.845480 10 log.go:172] (0x400281c580) Go away received
I0814 14:18:07.847113 10 log.go:172] (0x400281c580) (0x400199dae0) Stream removed, broadcasting: 1
I0814 14:18:07.847441 10 log.go:172] (0x400281c580) (0x4001e08280) Stream removed, broadcasting: 3
I0814 14:18:07.847632 10 log.go:172] (0x400281c580) (0x4001a12000) Stream removed, broadcasting: 5
Aug 14 14:18:07.848: INFO: Waiting for responses: map[]
Aug 14 14:18:07.853: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.15:8080/dial?request=hostname&protocol=http&host=10.244.1.14&port=8080&tries=1'] Namespace:pod-network-test-4652 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 14:18:07.854: INFO: >>> kubeConfig: /root/.kube/config
I0814 14:18:07.908034 10 log.go:172] (0x40024324d0) (0x4002392460) Create stream
I0814 14:18:07.908170 10 log.go:172] (0x40024324d0) (0x4002392460) Stream added, broadcasting: 1
I0814 14:18:07.911399 10 log.go:172] (0x40024324d0) Reply frame received for 1
I0814 14:18:07.911580 10 log.go:172] (0x40024324d0) (0x4001733a40) Create stream
I0814 14:18:07.911674 10 log.go:172] (0x40024324d0) (0x4001733a40) Stream added, broadcasting: 3
I0814 14:18:07.913615 10 log.go:172] (0x40024324d0) Reply frame received for 3
I0814 14:18:07.913758 10 log.go:172] (0x40024324d0) (0x4001adf360) Create stream
I0814 14:18:07.913838 10 log.go:172] (0x40024324d0) (0x4001adf360) Stream added, broadcasting: 5
I0814 14:18:07.915388 10 log.go:172] (0x40024324d0) Reply frame received for 5
I0814 14:18:07.990662 10 log.go:172] (0x40024324d0) Data frame received for 3
I0814 14:18:07.990895 10 log.go:172] (0x4001733a40) (3) Data frame handling
I0814 14:18:07.990986 10 log.go:172] (0x4001733a40) (3) Data frame sent
I0814 14:18:07.991074 10 log.go:172] (0x40024324d0) Data frame received for 5
I0814 14:18:07.991216 10 log.go:172] (0x4001adf360) (5) Data frame handling
I0814 14:18:07.991313 10 log.go:172] (0x40024324d0) Data frame received for 3
I0814 14:18:07.991424 10 log.go:172] (0x4001733a40) (3) Data frame handling
I0814 14:18:07.992443 10 log.go:172] (0x40024324d0) Data frame received for 1
I0814 14:18:07.992544 10 log.go:172] (0x4002392460) (1) Data frame handling
I0814 14:18:07.992681 10 log.go:172] (0x4002392460) (1) Data frame sent
I0814 14:18:07.992933 10 log.go:172] (0x40024324d0) (0x4002392460) Stream removed, broadcasting: 1
I0814 14:18:07.993136 10 log.go:172] (0x40024324d0) Go away received
I0814 14:18:07.993340 10 log.go:172] (0x40024324d0) (0x4002392460) Stream removed, broadcasting: 1
I0814 14:18:07.993463 10 log.go:172] (0x40024324d0) (0x4001733a40) Stream removed, broadcasting: 3
I0814 14:18:07.993568 10 log.go:172] (0x40024324d0) (0x4001adf360) Stream removed, broadcasting: 5
Aug 14 14:18:07.993: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:18:07.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4652" for this suite.
• [SLOW TEST:29.255 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":206,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:18:08.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 14 14:18:11.309: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 14 14:18:13.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 14:18:16.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 14:18:18.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011491, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 14 14:18:21.010: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:18:33.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9246" for this suite.
STEP: Destroying namespace "webhook-9246-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:28.404 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":17,"skipped":226,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:18:36.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 14:18:37.216: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a3cc7410-5d08-4dae-b1e2-ba2ac89e03a4" in namespace "security-context-test-6290" to be "Succeeded or Failed"
Aug 14 14:18:37.537: INFO: Pod "busybox-readonly-false-a3cc7410-5d08-4dae-b1e2-ba2ac89e03a4": Phase="Pending", Reason="", readiness=false. Elapsed: 320.653875ms
Aug 14 14:18:39.675: INFO: Pod "busybox-readonly-false-a3cc7410-5d08-4dae-b1e2-ba2ac89e03a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.458465748s
Aug 14 14:18:41.681: INFO: Pod "busybox-readonly-false-a3cc7410-5d08-4dae-b1e2-ba2ac89e03a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.464336745s
Aug 14 14:18:43.826: INFO: Pod "busybox-readonly-false-a3cc7410-5d08-4dae-b1e2-ba2ac89e03a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.609799005s
Aug 14 14:18:45.834: INFO: Pod "busybox-readonly-false-a3cc7410-5d08-4dae-b1e2-ba2ac89e03a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.617299103s
Aug 14 14:18:45.834: INFO: Pod "busybox-readonly-false-a3cc7410-5d08-4dae-b1e2-ba2ac89e03a4" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:18:45.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6290" for this suite.
• [SLOW TEST:9.439 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:18:45.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Aug 14 14:18:46.129: INFO: PodSpec: initContainers in spec.initContainers Aug 14 14:19:47.102: INFO: init container has 
failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-45867386-0217-4956-8ae6-e8ece5223c6f", GenerateName:"", Namespace:"init-container-8802", SelfLink:"/api/v1/namespaces/init-container-8802/pods/pod-init-45867386-0217-4956-8ae6-e8ece5223c6f", UID:"6a55ffab-8a9a-4f80-98ef-48bb7a1f5999", ResourceVersion:"9536573", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733011526, loc:(*time.Location)(0x747e900)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"128192202"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x40036e40a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40036e40c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x40036e40e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40036e4100)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gx4fd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4004ce8000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gx4fd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gx4fd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gx4fd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40061b8098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40027f4000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x40061b8120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x40061b8150)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x40061b8158), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x40061b815c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011526, loc:(*time.Location)(0x747e900)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011526, loc:(*time.Location)(0x747e900)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011526, loc:(*time.Location)(0x747e900)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011526, loc:(*time.Location)(0x747e900)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.2.167", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.167"}}, StartTime:(*v1.Time)(0x40036e4120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x40036e4160), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x40027f40e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://736daab603eb07b9d17ccd83f59460aa8c5711130a5911af77d1840a66692ee9", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40036e4180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40036e4140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0x40061b81df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:19:47.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8802" for this suite. • [SLOW TEST:61.390 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":19,"skipped":294,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:19:47.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Aug 14 14:19:47.446: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:19:48.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7746" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":20,"skipped":299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:19:48.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:19:49.626: INFO: Creating deployment "webserver-deployment" Aug 14 14:19:49.633: INFO: Waiting for observed generation 1 Aug 14 14:19:52.101: INFO: Waiting for all required pods to come up Aug 14 14:19:52.132: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod 
is running Aug 14 14:20:12.151: INFO: Waiting for deployment "webserver-deployment" to complete Aug 14 14:20:12.163: INFO: Updating deployment "webserver-deployment" with a non-existent image Aug 14 14:20:12.179: INFO: Updating deployment webserver-deployment Aug 14 14:20:12.179: INFO: Waiting for observed generation 2 Aug 14 14:20:15.006: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 14 14:20:15.886: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 14 14:20:16.241: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 14 14:20:17.732: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 14 14:20:17.732: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 14 14:20:18.634: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 14 14:20:19.640: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Aug 14 14:20:19.641: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Aug 14 14:20:19.942: INFO: Updating deployment webserver-deployment Aug 14 14:20:19.942: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Aug 14 14:20:20.213: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 14 14:20:27.176: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 14 14:20:28.739: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9320 /apis/apps/v1/namespaces/deployment-9320/deployments/webserver-deployment a90be0ce-4874-4d5d-93d6-59d0ea836260 9536943 3 2020-08-14 
14:19:49 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-14 14:20:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}],}} {kube-controller-manager Update apps/v1 2020-08-14 14:20:25 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003f04cc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum
availability.,LastUpdateTime:2020-08-14 14:20:20 +0000 UTC,LastTransitionTime:2020-08-14 14:20:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-08-14 14:20:25 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 14 14:20:29.601: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-9320 /apis/apps/v1/namespaces/deployment-9320/replicasets/webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 9536937 3 2020-08-14 14:20:12 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment a90be0ce-4874-4d5d-93d6-59d0ea836260 0x4003f05137 0x4003f05138}] [] [{kube-controller-manager Update apps/v1 2020-08-14 14:20:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a90be0ce-4874-4d5d-93d6-59d0ea836260\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003f051b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 14 14:20:29.602: INFO: All old ReplicaSets of Deployment 
"webserver-deployment": Aug 14 14:20:29.603: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-9320 /apis/apps/v1/namespaces/deployment-9320/replicasets/webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 9536931 3 2020-08-14 14:19:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment a90be0ce-4874-4d5d-93d6-59d0ea836260 0x4003f05217 0x4003f05218}] [] [{kube-controller-manager Update apps/v1 2020-08-14 14:20:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 48 98 101 48 99 101 45 52 56 55 52 45 52 100 53 100 45 57 51 100 54 45 53 57 100 48 101 97 56 51 54 50 54 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 
116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 
100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003f05288 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Aug 14 14:20:29.861: INFO: Pod "webserver-deployment-6676bcd6d4-bgmz5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bgmz5 webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-bgmz5 eb3bead1-a17a-4315-9fa9-37fa5b111ec8 9536945 0 2020-08-14 14:20:12 +0000 UTC 
map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x40040054a7 0x40040054a8}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[... decimal-encoded managed-fields JSON omitted ...],}} {kubelet Update v1 2020-08-14 14:20:26 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... decimal-encoded managed-fields JSON omitted ...],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvF
rom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.24,StartTime:2020-08-14 14:20:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.862: INFO: Pod "webserver-deployment-6676bcd6d4-c59fl" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-c59fl webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-c59fl c303b959-d6df-4c01-91ea-1bd308ad1337 9536942 0 2020-08-14 14:20:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x40040056a7 0x40040056a8}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[... decimal-encoded managed-fields JSON omitted ...],}} {kubelet Update v1 2020-08-14 14:20:25 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... decimal-encoded managed-fields JSON omitted ...],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSe
crets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.175,StartTime:2020-08-14 14:20:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: 
server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.175,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.864: INFO: Pod "webserver-deployment-6676bcd6d4-dlnc5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dlnc5 webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-dlnc5 02f2ad9f-00d8-4572-8977-9de05f483316 9536902 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x4004005887 0x4004005888}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[... decimal-encoded managed-fields JSON omitted ...],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.865: INFO: Pod "webserver-deployment-6676bcd6d4-f2j6j" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-f2j6j webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-f2j6j 3139ff8a-cd35-4abe-8cd1-aeb2befd30cb 9536892 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x40040059d7 0x40040059d8}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[... decimal-encoded managed-fields JSON omitted ...],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]Lo
calObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 14 14:20:29.866: INFO: Pod "webserver-deployment-6676bcd6d4-jv9wh" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jv9wh webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-jv9wh fffe0079-303e-4c25-8970-c715c57ee442 9536843 0 2020-08-14 14:20:12 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x4004005b17 0x4004005b18}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b83f11b9-bbd2-4b54-b244-e1f74152061b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-14 14:20:18 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.22\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:ni
l,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.22,StartTime:2020-08-14 14:20:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference 
"docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 14 14:20:29.867: INFO: Pod "webserver-deployment-6676bcd6d4-mhhfd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mhhfd webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-mhhfd 446f619e-0f43-482f-8272-b66a314b532b 9536917 0 2020-08-14 14:20:21 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x4004005d37 0x4004005d38}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b83f11b9-bbd2-4b54-b244-e1f74152061b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 14 14:20:29.869: INFO: Pod "webserver-deployment-6676bcd6d4-nchht" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nchht webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-nchht 4043fd32-0b58-4743-827f-28ebad4a1fe1 9536959 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x40041f40c7 0x40041f40c8}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b83f11b9-bbd2-4b54-b244-e1f74152061b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-14 14:20:28 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-14 14:20:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 14 14:20:29.870: INFO: Pod "webserver-deployment-6676bcd6d4-p7nt5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-p7nt5 webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-p7nt5 537d20ae-733c-4fc7-b2eb-ac2b751a01d4 9536908 0 2020-08-14 14:20:12 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x40041f4277 0x40041f4278}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b83f11b9-bbd2-4b54-b244-e1f74152061b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-14 14:20:24 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.23\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.23,StartTime:2020-08-14 14:20:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: 
authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.871: INFO: Pod "webserver-deployment-6676bcd6d4-pgvhd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pgvhd webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-pgvhd d70bf3cb-3221-4788-9bc5-b67a7d62fb21 9536890 0 2020-08-14 14:20:12 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x40041f4457 0x40041f4458}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:12 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b83f11b9-bbd2-4b54-b244-e1f74152061b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 14:20:21 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.174\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.174,StartTime:2020-08-14 14:20:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: 
authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.873: INFO: Pod "webserver-deployment-6676bcd6d4-pmcmh" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pmcmh webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-pmcmh bd8e2f99-3b4a-4dfa-ba40-1c095ca5673f 9536900 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x40041f4637 0x40041f4638}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b83f11b9-bbd2-4b54-b244-e1f74152061b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.874: INFO: Pod "webserver-deployment-6676bcd6d4-wnl77" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wnl77 webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-wnl77 e6f0024f-57f5-46f6-854e-d85443ffbe61 9536866 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x40041f4777 0x40041f4778}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b83f11b9-bbd2-4b54-b244-e1f74152061b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalO
bjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.875: INFO: Pod "webserver-deployment-6676bcd6d4-xmjlt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xmjlt webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-xmjlt c5229915-3719-40c5-b025-c117038e43f5 9536967 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x40041f48b7 0x40041f48b8}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b83f11b9-bbd2-4b54-b244-e1f74152061b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 14:20:29 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-14 14:20:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.876: INFO: Pod "webserver-deployment-6676bcd6d4-xsdqb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xsdqb webserver-deployment-6676bcd6d4- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-6676bcd6d4-xsdqb 48eea601-8a4f-4b8f-be35-ae4b75c25f38 9536869 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b83f11b9-bbd2-4b54-b244-e1f74152061b 0x40041f4a67 0x40041f4a68}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b83f11b9-bbd2-4b54-b244-e1f74152061b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.877: INFO: Pod "webserver-deployment-84855cf797-29g5f" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-29g5f webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-29g5f 1be7bcd8-f8ed-4440-a0d6-f8d3cbb6337f 9536954 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f4ba7 0x40041f4ba8}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 14:20:28 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-14 14:20:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.878: INFO: Pod "webserver-deployment-84855cf797-2hhgf" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2hhgf webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-2hhgf faa9e1f8-329d-4b9b-a8ac-c995b0eb4010 9536893 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f4d37 0x40041f4d38}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.880: INFO: Pod "webserver-deployment-84855cf797-2zm82" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2zm82 webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-2zm82 786c9bc2-6480-41b2-8cd9-d60a031f4a77 9536755 0 2020-08-14 14:19:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f4e67 0x40041f4e68}] [] [{kube-controller-manager Update v1 2020-08-14 14:19:49 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 14:20:09 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.20,StartTime:2020-08-14 14:19:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 14:20:06 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0e183111c9ac0c81153221f2fa568eaacd2a34bcaca44daf666ce94d50c7e554,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.881: INFO: Pod "webserver-deployment-84855cf797-5468k" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5468k webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-5468k e7bab332-b48b-4fa2-99a1-8611f93b0501 9536920 0 2020-08-14 14:20:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f5017 0x40041f5018}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:21 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.882: INFO: Pod "webserver-deployment-84855cf797-5dtkf" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5dtkf webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-5dtkf 1178168f-a911-4e24-9f00-d660f5ce2fa2 9536905 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f5147 0x40041f5148}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 
34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 97 53 48 97 98 102 98 45 48 49 51 102 45 52 48 56 52 45 98 56 55 99 45 55 97 99 102 101 54 55 98 56 55 99 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 
111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,F
SGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.883: INFO: Pod "webserver-deployment-84855cf797-5f6hk" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5f6hk webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-5f6hk 4f299aa4-32d3-4a02-80c9-21ca062888d3 9536891 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f5277 0x40041f5278}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 97 53 48 97 98 102 98 45 48 49 51 102 45 52 48 56 52 45 98 56 55 99 45 55 97 99 102 101 54 55 98 56 55 99 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 
97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,Run
AsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.884: INFO: Pod "webserver-deployment-84855cf797-5rwl8" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5rwl8 webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-5rwl8 75abb8e6-d3da-4085-8db5-cae43948477b 9536703 0 2020-08-14 14:19:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f53b7 0x40041f53b8}] [] [{kube-controller-manager Update v1 2020-08-14 14:19:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 
58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 97 53 48 97 98 102 98 45 48 49 51 102 45 52 48 56 52 45 98 56 55 99 45 55 97 99 102 101 54 55 98 56 55 99 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 
58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-14 14:20:01 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 
34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOpt
ions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.17,StartTime:2020-08-14 14:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 14:20:00 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://966cc92dbe0bd086124b41849cf88082384f7d4795faa818bb66880b7d228851,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.885: INFO: Pod "webserver-deployment-84855cf797-8lqz5" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8lqz5 webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-8lqz5 a559c17a-8c53-4422-9b82-ac3b6fdf6c70 9536883 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f5567 0x40041f5568}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 97 53 48 97 98 102 98 45 48 49 51 102 45 52 48 56 52 45 98 56 55 99 45 55 97 99 102 101 54 55 98 56 55 99 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.887: INFO: Pod "webserver-deployment-84855cf797-fptxx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fptxx webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-fptxx 11f429da-84cd-4cf5-83e8-0f3159422f7c 9536966 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f5697 0x40041f5698}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 14:20:29 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-14 14:20:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.888: INFO: Pod "webserver-deployment-84855cf797-fqdjk" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fqdjk webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-fqdjk c8cb3aa8-3ff7-4a9c-84f3-0d83b16912e0 9536716 0 2020-08-14 14:19:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f5827 0x40041f5828}] [] [{kube-controller-manager Update v1 2020-08-14 14:19:49 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 14:20:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.170\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.170,StartTime:2020-08-14 14:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 14:20:02 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://03d4ab3448740662fb0b3913e1c37f84b03f326b604f382be509d52e9949ab15,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.889: INFO: Pod "webserver-deployment-84855cf797-g2g5h" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-g2g5h webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-g2g5h 149b99b4-b373-428b-b7aa-e5e24e659791 9536922 0 2020-08-14 14:20:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f59e7 0x40041f59e8}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:21 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.891: INFO: Pod "webserver-deployment-84855cf797-jbxzj" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jbxzj webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-jbxzj c8717006-45d5-408e-a4a9-d953caecc73f 9536738 0 2020-08-14 14:19:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f5b27 0x40041f5b28}] [] [{kube-controller-manager Update v1 2020-08-14 14:19:49 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 14:20:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.172\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.172,StartTime:2020-08-14 14:19:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 14:20:04 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e905cc816e17b00a7debf59cd5202c713cecda84676b7d3462ba55351815ae89,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.172,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.892: INFO: Pod "webserver-deployment-84855cf797-jl4s8" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jl4s8 webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-jl4s8 e9b4e77f-986e-4e67-a710-19660a7d9e9f 9536673 0 2020-08-14 14:19:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f5ce7 0x40041f5ce8}] [] [{kube-controller-manager Update v1 2020-08-14 14:19:49 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 14:19:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.169\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.169,StartTime:2020-08-14 14:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 14:19:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a7ac602b3404907e4ccd05963f1af1afc3d6d6f66a34bb1d38f9842f6752229e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.895: INFO: Pod "webserver-deployment-84855cf797-qpx6s" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qpx6s webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-qpx6s 8afd50cc-4f2a-477e-814a-9c1e3b2eb1a9 9536713 0 2020-08-14 14:19:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x40041f5ea7 0x40041f5ea8}] [] [{kube-controller-manager Update v1 2020-08-14 14:19:49 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 14:20:03 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.171\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.171,StartTime:2020-08-14 14:19:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 14:20:01 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3faa237e6d3e1ba65ca45a8ae64d7998d0a22ca5b58825f8c82a67204d111e97,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.171,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.896: INFO: Pod "webserver-deployment-84855cf797-rrlb6" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rrlb6 webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-rrlb6 7cda82e0-bc5c-4637-97e0-1c715602edeb 9536916 0 2020-08-14 14:20:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x4003f78057 0x4003f78058}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:21 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.898: INFO: Pod "webserver-deployment-84855cf797-rzlnz" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rzlnz webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-rzlnz 5057cad4-a302-45fa-a724-3bc4865be771 9536748 0 2020-08-14 14:19:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x4003f78187 0x4003f78188}] [] [{kube-controller-manager Update v1 2020-08-14 14:19:49 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 14:20:08 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.173\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.173,StartTime:2020-08-14 14:19:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 14:20:06 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e96edc86dcc558446d695af7c5893934a761be61cfe27df8237b289d76b4d6ee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.173,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.900: INFO: Pod "webserver-deployment-84855cf797-tqspx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tqspx webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-tqspx 0d63e5bf-8493-4930-b909-ba1cbe9dfad4 9536903 0 2020-08-14 14:20:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x4003f78337 0x4003f78338}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:20 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.901: INFO: Pod "webserver-deployment-84855cf797-xrspl" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xrspl webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-xrspl 4ab5eeb6-4929-4e6d-a8f3-6f0b10cbfd23 9536919 0 2020-08-14 14:20:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x4003f78467 0x4003f78468}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FS
GroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.903: INFO: Pod "webserver-deployment-84855cf797-xsl8w" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xsl8w webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-xsl8w bc3cea3d-30a4-4a29-8202-602206c4dfa9 9536737 0 2020-08-14 14:19:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x4003f78597 0x4003f78598}] [] [{kube-controller-manager Update v1 2020-08-14 14:19:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-14 14:20:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecret
s:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.18,StartTime:2020-08-14 14:19:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 14:20:04 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8a45ddad4b2410ef5a030101c93405324ec662ef6ae4422f2170c1826f471f66,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:20:29.904: INFO: Pod "webserver-deployment-84855cf797-z6mdv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-z6mdv webserver-deployment-84855cf797- deployment-9320 /api/v1/namespaces/deployment-9320/pods/webserver-deployment-84855cf797-z6mdv c3396e0e-21f0-40a2-be5e-394ca4dd2dab 9536918 0 2020-08-14 14:20:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 fa50abfb-013f-4084-b87c-7acfe67b87c8 0x4003f78757 0x4003f78758}] [] [{kube-controller-manager Update v1 2020-08-14 14:20:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa50abfb-013f-4084-b87c-7acfe67b87c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x2n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x2n5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x2n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:20:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:20:29.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9320" for this suite. 
• [SLOW TEST:41.715 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":21,"skipped":368,"failed":0} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:20:30.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 14 14:21:14.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:14.312: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:16.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:17.423: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:18.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:18.385: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:20.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:20.953: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:22.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:22.477: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:24.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:24.517: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:26.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:26.673: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:28.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:28.551: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:30.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:30.425: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:32.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:32.482: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:34.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:34.509: INFO: Pod 
pod-with-poststart-exec-hook still exists Aug 14 14:21:36.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:36.323: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:38.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:38.460: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:40.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:40.331: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:42.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:42.319: INFO: Pod pod-with-poststart-exec-hook still exists Aug 14 14:21:44.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 14 14:21:44.320: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:21:44.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5120" for this suite. 
• [SLOW TEST:74.344 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":368,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:21:44.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 14 14:21:55.276: INFO: Expected: &{DONE} to match Container's Termination 
Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:21:55.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-565" for this suite. • [SLOW TEST:10.518 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":390,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:21:55.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:21:55.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1899" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":24,"skipped":404,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:21:55.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 14 14:22:00.032: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Aug 14 14:22:02.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011720, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011720, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011720, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011719, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 14:22:04.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011720, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011720, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011720, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011719, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 14:22:08.930: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:22:12.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1685" for this suite. STEP: Destroying namespace "webhook-1685-markers" for this suite. 
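An aside on reading the `DeploymentStatus` dumps above: the `LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011720, ...}}` values are printed from Go's internal `time.Time` layout. When the monotonic bit in `wall` is clear (`wall:0x0`), `ext` holds whole seconds since 0001-01-01 UTC. This is an internal, unstable representation, but a minimal Python sketch can recover the timestamp (the constant below is copied from the log above):

```python
from datetime import datetime, timedelta

def go_ext_to_datetime(ext_seconds: int) -> datetime:
    """Convert a Go time.Time 'ext' value (seconds since 0001-01-01 UTC,
    valid only when the monotonic bit in 'wall' is clear) to a datetime."""
    return datetime(1, 1, 1) + timedelta(seconds=ext_seconds)

# ext value taken from the DeploymentStatus dump above
print(go_ext_to_datetime(63733011720))  # → 2020-08-14 14:22:00
```

That matches the `Aug 14 14:22:00.032` wall-clock timestamp on the surrounding log lines, which is a useful sanity check when correlating status dumps with the rest of the run.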
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.513 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":25,"skipped":404,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:22:15.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:22:19.058: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 14 14:22:39.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9210 create -f -' 
Aug 14 14:22:45.720: INFO: stderr: "" Aug 14 14:22:45.721: INFO: stdout: "e2e-test-crd-publish-openapi-5646-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 14 14:22:45.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9210 delete e2e-test-crd-publish-openapi-5646-crds test-cr' Aug 14 14:22:47.500: INFO: stderr: "" Aug 14 14:22:47.501: INFO: stdout: "e2e-test-crd-publish-openapi-5646-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 14 14:22:47.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9210 apply -f -' Aug 14 14:22:49.090: INFO: stderr: "" Aug 14 14:22:49.091: INFO: stdout: "e2e-test-crd-publish-openapi-5646-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 14 14:22:49.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9210 delete e2e-test-crd-publish-openapi-5646-crds test-cr' Aug 14 14:22:50.386: INFO: stderr: "" Aug 14 14:22:50.386: INFO: stdout: "e2e-test-crd-publish-openapi-5646-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 14 14:22:50.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5646-crds' Aug 14 14:22:51.973: INFO: stderr: "" Aug 14 14:22:51.974: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5646-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:23:02.255: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9210" for this suite. • [SLOW TEST:47.088 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":26,"skipped":405,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:23:02.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 14 14:23:09.621: INFO: Successfully updated pod "pod-update-9924fc91-9696-4c75-9daa-2209ff47c0f5" STEP: verifying the updated pod is in kubernetes Aug 14 14:23:09.632: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:23:09.632: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8890" for this suite. • [SLOW TEST:7.136 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:23:09.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:23:09.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config version' Aug 14 14:23:11.771: INFO: stderr: "" Aug 14 14:23:11.771: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.5\", GitCommit:\"e6503f8d8f769ace2f338794c914a96fc335df0f\", GitTreeState:\"clean\", BuildDate:\"2020-08-14T08:46:50Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", 
GitVersion:\"v1.18.4\", GitCommit:\"c96aede7b5205121079932896c4ad89bb93260af\", GitTreeState:\"clean\", BuildDate:\"2020-06-20T01:49:49Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:23:11.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2040" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":28,"skipped":461,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:23:12.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 14 14:23:16.512: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 14 14:23:18.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 14:23:20.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 14:23:22.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733011796, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 14:23:25.719: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:23:26.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1564" for this suite. STEP: Destroying namespace "webhook-1564-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.680 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":29,"skipped":472,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:23:27.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:23:27.695: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 14 14:23:32.702: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 14 14:23:32.703: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] 
Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 14 14:23:42.596: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1634 /apis/apps/v1/namespaces/deployment-1634/deployments/test-cleanup-deployment d2f20990-7503-44d5-88ff-2a23dfe36359 9538204 1 2020-08-14 14:23:32 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-08-14 14:23:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 
44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-14 14:23:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 
117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003a65b08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-14 14:23:32 +0000 UTC,LastTransitionTime:2020-08-14 14:23:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-b4867b47f" has successfully progressed.,LastUpdateTime:2020-08-14 14:23:40 +0000 UTC,LastTransitionTime:2020-08-14 14:23:32 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 14 14:23:43.053: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-1634 /apis/apps/v1/namespaces/deployment-1634/replicasets/test-cleanup-deployment-b4867b47f 079646a4-658e-4ded-b47a-fbd3fdeeee88 9538192 1 2020-08-14 14:23:32 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment d2f20990-7503-44d5-88ff-2a23dfe36359 0x4003a65fe0 0x4003a65fe1}] [] [{kube-controller-manager Update apps/v1 2020-08-14 14:23:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 
109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 50 102 50 48 57 57 48 45 55 53 48 51 45 52 52 100 53 45 56 56 102 102 45 50 97 50 51 100 102 101 51 54 51 53 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 
115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003aaa078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 14 14:23:43.062: INFO: Pod "test-cleanup-deployment-b4867b47f-rrpgf" is available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-rrpgf test-cleanup-deployment-b4867b47f- deployment-1634 /api/v1/namespaces/deployment-1634/pods/test-cleanup-deployment-b4867b47f-rrpgf b526ed3c-eecc-473c-9dff-8eb3cdc02338 9538191 0 2020-08-14 14:23:32 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 079646a4-658e-4ded-b47a-fbd3fdeeee88 0x4003aaa560 0x4003aaa561}] [] [{kube-controller-manager Update v1 2020-08-14 14:23:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 55 57 54 52 54 97 52 45 54 53 56 101 45 52 100 101 100 45 98 52 55 97 45 102 98 100 51 102 100 101 101 101 101 56 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 
58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-14 14:23:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 
101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5hf2x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5hf2x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5hf2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImageP
ullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:23:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:23:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:23:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 14:23:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.195,StartTime:2020-08-14 14:23:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 14:23:37 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://5c9bdf87be8dcf2da5c80450ce87b8f647e0ed584136525b7e1b037127224357,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.195,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:23:43.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1634" for this suite.
• [SLOW TEST:16.053 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":30,"skipped":505,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:23:43.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-6df03222-44de-451f-8e9c-8ea221ad9bb5
STEP: Creating a pod to test consume secrets
Aug 14 14:23:44.925: INFO: Waiting up to 5m0s for pod "pod-secrets-c0c2c0b4-5aec-463f-bffc-e79ca8f364d0" in namespace "secrets-5728" to be "Succeeded or Failed"
Aug 14 14:23:44.934: INFO: Pod "pod-secrets-c0c2c0b4-5aec-463f-bffc-e79ca8f364d0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743849ms
Aug 14 14:23:47.050: INFO: Pod "pod-secrets-c0c2c0b4-5aec-463f-bffc-e79ca8f364d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124113699s
Aug 14 14:23:49.115: INFO: Pod "pod-secrets-c0c2c0b4-5aec-463f-bffc-e79ca8f364d0": Phase="Running", Reason="", readiness=true. Elapsed: 4.189404917s
Aug 14 14:23:51.123: INFO: Pod "pod-secrets-c0c2c0b4-5aec-463f-bffc-e79ca8f364d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.196972683s
STEP: Saw pod success
Aug 14 14:23:51.123: INFO: Pod "pod-secrets-c0c2c0b4-5aec-463f-bffc-e79ca8f364d0" satisfied condition "Succeeded or Failed"
Aug 14 14:23:51.128: INFO: Trying to get logs from node kali-worker pod pod-secrets-c0c2c0b4-5aec-463f-bffc-e79ca8f364d0 container secret-volume-test:
STEP: delete the pod
Aug 14 14:23:51.211: INFO: Waiting for pod pod-secrets-c0c2c0b4-5aec-463f-bffc-e79ca8f364d0 to disappear
Aug 14 14:23:51.226: INFO: Pod pod-secrets-c0c2c0b4-5aec-463f-bffc-e79ca8f364d0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:23:51.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5728" for this suite.
STEP: Destroying namespace "secret-namespace-6869" for this suite.
• [SLOW TEST:8.183 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":554,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:23:51.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-92092b3b-d637-4bae-a3dc-ed3f9c4719d5
STEP: Creating secret with name s-test-opt-upd-393dd3ca-00f7-4fed-a37b-867b090071f2
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-92092b3b-d637-4bae-a3dc-ed3f9c4719d5
STEP: Updating secret s-test-opt-upd-393dd3ca-00f7-4fed-a37b-867b090071f2
STEP: Creating secret with name s-test-opt-create-8e140ae0-96da-462a-91dc-c50fd3123611
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:25:11.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3057" for this suite.
• [SLOW TEST:80.281 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":583,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:25:11.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 14 14:25:11.710: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:11.749: INFO: Number of nodes with available pods: 0
Aug 14 14:25:11.750: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:12.760: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:12.766: INFO: Number of nodes with available pods: 0
Aug 14 14:25:12.766: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:13.901: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:13.907: INFO: Number of nodes with available pods: 0
Aug 14 14:25:13.907: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:14.759: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:14.776: INFO: Number of nodes with available pods: 0
Aug 14 14:25:14.776: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:15.757: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:15.762: INFO: Number of nodes with available pods: 2
Aug 14 14:25:15.763: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 14 14:25:15.841: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:15.847: INFO: Number of nodes with available pods: 1
Aug 14 14:25:15.847: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:16.865: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:17.075: INFO: Number of nodes with available pods: 1
Aug 14 14:25:17.075: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:17.860: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:17.867: INFO: Number of nodes with available pods: 1
Aug 14 14:25:17.867: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:18.857: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:18.864: INFO: Number of nodes with available pods: 1
Aug 14 14:25:18.864: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:19.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:19.864: INFO: Number of nodes with available pods: 1
Aug 14 14:25:19.864: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:20.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:20.865: INFO: Number of nodes with available pods: 1
Aug 14 14:25:20.865: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:21.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:21.863: INFO: Number of nodes with available pods: 1
Aug 14 14:25:21.863: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:22.857: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:22.861: INFO: Number of nodes with available pods: 1
Aug 14 14:25:22.861: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:23.945: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:24.502: INFO: Number of nodes with available pods: 1
Aug 14 14:25:24.502: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:25.206: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:25.416: INFO: Number of nodes with available pods: 1
Aug 14 14:25:25.416: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:25.857: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:25.864: INFO: Number of nodes with available pods: 1
Aug 14 14:25:25.864: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:26.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:26.866: INFO: Number of nodes with available pods: 1
Aug 14 14:25:26.866: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:27.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:27.866: INFO: Number of nodes with available pods: 1
Aug 14 14:25:27.866: INFO: Node kali-worker is running more than one daemon pod
Aug 14 14:25:28.860: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 14:25:28.866: INFO: Number of nodes with available pods: 2
Aug 14 14:25:28.866: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1561, will wait for the garbage collector to delete the pods
Aug 14 14:25:28.939: INFO: Deleting DaemonSet.extensions daemon-set took: 10.887223ms
Aug 14 14:25:29.241: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.670882ms
Aug 14 14:25:44.461: INFO: Number of nodes with available pods: 0
Aug 14 14:25:44.461: INFO: Number of running nodes: 0, number of available pods: 0
Aug 14 14:25:45.182: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1561/daemonsets","resourceVersion":"9538831"},"items":null}
Aug 14 14:25:46.645: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1561/pods","resourceVersion":"9538834"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:25:46.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1561" for this suite.
• [SLOW TEST:35.388 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":33,"skipped":603,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:25:46.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-5892
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5892 to expose endpoints map[]
Aug 14 14:25:49.976: INFO: successfully validated that service endpoint-test2 in namespace services-5892 exposes endpoints map[] (637.545883ms elapsed)
STEP: Creating pod pod1 in namespace services-5892
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5892 to expose endpoints map[pod1:[80]]
Aug 14 14:25:56.979: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (6.223027337s elapsed, will retry)
Aug 14 14:26:03.023: INFO: successfully validated that service endpoint-test2 in namespace services-5892 exposes endpoints map[pod1:[80]] (12.267327034s elapsed)
STEP: Creating pod pod2 in namespace services-5892
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5892 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 14 14:26:07.860: INFO: successfully validated that service endpoint-test2 in namespace services-5892 exposes endpoints map[pod1:[80] pod2:[80]] (4.830183823s elapsed)
STEP: Deleting pod pod1 in namespace services-5892
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5892 to expose endpoints map[pod2:[80]]
Aug 14 14:26:07.920: INFO: successfully validated that service endpoint-test2 in namespace services-5892 exposes endpoints map[pod2:[80]] (51.153147ms elapsed)
STEP: Deleting pod pod2 in namespace services-5892
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5892 to expose endpoints map[]
Aug 14 14:26:07.945: INFO: successfully validated that service endpoint-test2 in namespace services-5892 exposes endpoints map[] (17.931441ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:26:08.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5892" for this suite.
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:21.803 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":34,"skipped":617,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:26:08.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6486 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Aug 14 14:26:09.795: INFO: Found 0 stateful pods, waiting for 3 Aug 14 14:26:19.806: INFO: Found 2 stateful pods, 
waiting for 3 Aug 14 14:26:29.808: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 14 14:26:29.809: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 14 14:26:29.809: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 14 14:26:29.855: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 14 14:26:39.938: INFO: Updating stateful set ss2 Aug 14 14:26:40.000: INFO: Waiting for Pod statefulset-6486/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 14 14:26:50.013: INFO: Waiting for Pod statefulset-6486/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Aug 14 14:27:03.261: INFO: Found 2 stateful pods, waiting for 3 Aug 14 14:27:13.328: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 14 14:27:13.328: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 14 14:27:13.328: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 14 14:27:23.269: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 14 14:27:23.269: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 14 14:27:23.269: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 14 14:27:23.301: INFO: Updating stateful set ss2 Aug 14 14:27:23.348: INFO: Waiting for Pod statefulset-6486/ss2-1 to have revision ss2-84f9d6bf57 update 
revision ss2-65c7964b94 Aug 14 14:27:33.378: INFO: Updating stateful set ss2 Aug 14 14:27:33.509: INFO: Waiting for StatefulSet statefulset-6486/ss2 to complete update Aug 14 14:27:33.510: INFO: Waiting for Pod statefulset-6486/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 14 14:27:43.524: INFO: Deleting all statefulset in ns statefulset-6486 Aug 14 14:27:43.531: INFO: Scaling statefulset ss2 to 0 Aug 14 14:28:13.619: INFO: Waiting for statefulset status.replicas updated to 0 Aug 14 14:28:13.629: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:28:13.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6486" for this suite. 
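[Editor's note] The canary and phased rolling updates logged above are driven by the StatefulSet `RollingUpdate` partition. A minimal sketch of that kind of manifest follows; the name, labels, and replica count are illustrative assumptions, not the test's actual `ss2` object (only the httpd image tags appear in the log):

```yaml
# Illustrative sketch, not the e2e test's manifest.
# Pods with ordinal >= partition get the new template revision;
# lowering the partition step by step phases the rollout.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web            # assumption; the test uses ss2
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2     # canary: only the highest-ordinal pod updates
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.39-alpine  # the "new" image from the log
```

With `partition: 2`, only `web-2` is updated to the new revision (the canary seen above for `ss2-2`); setting the partition back to 0 completes the phased rollout across the remaining ordinals.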
• [SLOW TEST:124.921 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":35,"skipped":629,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:28:13.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:28:13.954: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:28:15.383: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7535" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":36,"skipped":636,"failed":0} SSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:28:15.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
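[Editor's note] The DNS test below creates a pod with `dnsPolicy: None` and a custom `dnsConfig`; the pod dump in this log shows `Nameservers:[1.1.1.1]` and `Searches:[resolv.conf.local]`. A minimal sketch of such a pod spec (the pod name and image are illustrative assumptions):

```yaml
# Illustrative sketch of a pod with fully custom DNS, matching the
# nameserver/search values visible in the log. dnsPolicy "None" tells
# the kubelet to ignore cluster DNS and use only dnsConfig.
apiVersion: v1
kind: Pod
metadata:
  name: dns-example    # assumption; the test pod is dns-8805
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["pause"]
```

The resulting `/etc/resolv.conf` inside the container contains only these entries, which is what the test's `dns-suffix` and `dns-server-list` checks verify.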
Aug 14 14:28:15.660: INFO: Created pod &Pod{ObjectMeta:{dns-8805 dns-8805 /api/v1/namespaces/dns-8805/pods/dns-8805 e180c3ad-8851-40f7-9b4a-bc0b1b5025ac 9539687 0 2020-08-14 14:28:15 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-08-14 14:28:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5tr7n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5tr7n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5tr7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kuber
netes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 14 14:28:15.883: INFO: The status of Pod dns-8805 is Pending, waiting for it to be Running (with Ready = true) Aug 14 14:28:17.889: INFO: The status of Pod dns-8805 is Pending, waiting for it to be Running (with Ready = true) Aug 14 14:28:19.888: INFO: The status of Pod dns-8805 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Aug 14 14:28:19.889: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8805 PodName:dns-8805 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:28:19.889: INFO: >>> kubeConfig: /root/.kube/config I0814 14:28:19.954857 10 log.go:172] (0x40029d4580) (0x4002755720) Create stream I0814 14:28:19.954983 10 log.go:172] (0x40029d4580) (0x4002755720) Stream added, broadcasting: 1 I0814 14:28:19.958821 10 log.go:172] (0x40029d4580) Reply frame received for 1 I0814 14:28:19.959014 10 log.go:172] (0x40029d4580) (0x4001e865a0) Create stream I0814 14:28:19.959085 10 log.go:172] (0x40029d4580) (0x4001e865a0) Stream added, broadcasting: 3 I0814 14:28:19.960306 10 log.go:172] (0x40029d4580) Reply frame received for 3 I0814 14:28:19.960397 10 log.go:172] (0x40029d4580) (0x4001d93ae0) Create stream I0814 14:28:19.960448 10 log.go:172] (0x40029d4580) (0x4001d93ae0) Stream added, broadcasting: 5 I0814 14:28:19.961541 10 log.go:172] (0x40029d4580) Reply frame received for 5 I0814 14:28:20.046185 10 log.go:172] (0x40029d4580) Data frame received for 3 I0814 14:28:20.046283 10 log.go:172] (0x4001e865a0) (3) Data frame handling I0814 14:28:20.046378 10 log.go:172] (0x4001e865a0) (3) Data frame sent I0814 14:28:20.047279 10 log.go:172] (0x40029d4580) Data frame received for 3 I0814 14:28:20.047333 10 log.go:172] (0x4001e865a0) (3) Data frame handling I0814 14:28:20.047418 10 log.go:172] (0x40029d4580) Data frame received for 5 I0814 14:28:20.047504 10 log.go:172] (0x4001d93ae0) (5) Data frame handling I0814 14:28:20.048565 10 log.go:172] (0x40029d4580) Data frame received for 1 I0814 14:28:20.048628 10 log.go:172] (0x4002755720) (1) Data frame handling I0814 14:28:20.048695 10 log.go:172] (0x4002755720) (1) Data frame sent I0814 14:28:20.048817 10 log.go:172] (0x40029d4580) (0x4002755720) Stream removed, broadcasting: 1 I0814 14:28:20.049166 10 log.go:172] (0x40029d4580) Go away received I0814 14:28:20.049340 10 
log.go:172] (0x40029d4580) (0x4002755720) Stream removed, broadcasting: 1 I0814 14:28:20.049416 10 log.go:172] (0x40029d4580) (0x4001e865a0) Stream removed, broadcasting: 3 I0814 14:28:20.049464 10 log.go:172] (0x40029d4580) (0x4001d93ae0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Aug 14 14:28:20.049: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8805 PodName:dns-8805 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:28:20.049: INFO: >>> kubeConfig: /root/.kube/config I0814 14:28:20.094039 10 log.go:172] (0x4002c326e0) (0x4001edc0a0) Create stream I0814 14:28:20.094134 10 log.go:172] (0x4002c326e0) (0x4001edc0a0) Stream added, broadcasting: 1 I0814 14:28:20.096480 10 log.go:172] (0x4002c326e0) Reply frame received for 1 I0814 14:28:20.096619 10 log.go:172] (0x4002c326e0) (0x4002286b40) Create stream I0814 14:28:20.096692 10 log.go:172] (0x4002c326e0) (0x4002286b40) Stream added, broadcasting: 3 I0814 14:28:20.097745 10 log.go:172] (0x4002c326e0) Reply frame received for 3 I0814 14:28:20.097814 10 log.go:172] (0x4002c326e0) (0x4001edc140) Create stream I0814 14:28:20.097856 10 log.go:172] (0x4002c326e0) (0x4001edc140) Stream added, broadcasting: 5 I0814 14:28:20.098747 10 log.go:172] (0x4002c326e0) Reply frame received for 5 I0814 14:28:20.168615 10 log.go:172] (0x4002c326e0) Data frame received for 3 I0814 14:28:20.168813 10 log.go:172] (0x4002286b40) (3) Data frame handling I0814 14:28:20.168920 10 log.go:172] (0x4002286b40) (3) Data frame sent I0814 14:28:20.170144 10 log.go:172] (0x4002c326e0) Data frame received for 3 I0814 14:28:20.170230 10 log.go:172] (0x4002286b40) (3) Data frame handling I0814 14:28:20.170372 10 log.go:172] (0x4002c326e0) Data frame received for 5 I0814 14:28:20.170443 10 log.go:172] (0x4001edc140) (5) Data frame handling I0814 14:28:20.171583 10 log.go:172] (0x4002c326e0) Data frame received for 1 I0814 
14:28:20.171655 10 log.go:172] (0x4001edc0a0) (1) Data frame handling I0814 14:28:20.171723 10 log.go:172] (0x4001edc0a0) (1) Data frame sent I0814 14:28:20.171815 10 log.go:172] (0x4002c326e0) (0x4001edc0a0) Stream removed, broadcasting: 1 I0814 14:28:20.172073 10 log.go:172] (0x4002c326e0) (0x4001edc0a0) Stream removed, broadcasting: 1 I0814 14:28:20.172139 10 log.go:172] (0x4002c326e0) (0x4002286b40) Stream removed, broadcasting: 3 I0814 14:28:20.172200 10 log.go:172] (0x4002c326e0) (0x4001edc140) Stream removed, broadcasting: 5 I0814 14:28:20.172364 10 log.go:172] (0x4002c326e0) Go away received Aug 14 14:28:20.172: INFO: Deleting pod dns-8805... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:28:20.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8805" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":37,"skipped":640,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:28:20.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name 
secret-test-652e2651-b82d-4909-809b-8566697a40f4 STEP: Creating a pod to test consume secrets Aug 14 14:28:20.819: INFO: Waiting up to 5m0s for pod "pod-secrets-16fa9046-6c97-4381-a515-3508d2d21232" in namespace "secrets-819" to be "Succeeded or Failed" Aug 14 14:28:20.889: INFO: Pod "pod-secrets-16fa9046-6c97-4381-a515-3508d2d21232": Phase="Pending", Reason="", readiness=false. Elapsed: 69.513249ms Aug 14 14:28:23.236: INFO: Pod "pod-secrets-16fa9046-6c97-4381-a515-3508d2d21232": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416765348s Aug 14 14:28:25.275: INFO: Pod "pod-secrets-16fa9046-6c97-4381-a515-3508d2d21232": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.455368014s STEP: Saw pod success Aug 14 14:28:25.275: INFO: Pod "pod-secrets-16fa9046-6c97-4381-a515-3508d2d21232" satisfied condition "Succeeded or Failed" Aug 14 14:28:25.278: INFO: Trying to get logs from node kali-worker pod pod-secrets-16fa9046-6c97-4381-a515-3508d2d21232 container secret-volume-test: STEP: delete the pod Aug 14 14:28:25.365: INFO: Waiting for pod pod-secrets-16fa9046-6c97-4381-a515-3508d2d21232 to disappear Aug 14 14:28:25.661: INFO: Pod pod-secrets-16fa9046-6c97-4381-a515-3508d2d21232 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:28:25.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-819" for this suite. 
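[Editor's note] The Secrets test above mounts a secret volume as a non-root user with `defaultMode` and `fsGroup` set. A minimal sketch of that shape of pod spec; all names, the image, and the specific mode/IDs are illustrative assumptions, not the test's values:

```yaml
# Illustrative sketch: secret volume consumed as non-root,
# with file mode and fsGroup controlling on-disk permissions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # assumption
spec:
  securityContext:
    runAsUser: 1000           # non-root, per the [LinuxOnly] test's intent
    fsGroup: 2000             # secret files are group-owned by this GID
  containers:
  - name: secret-volume-test
    image: busybox            # assumption; the test uses an e2e image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # assumption
      defaultMode: 0440                 # readable by owner and fsGroup only
```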
• [SLOW TEST:5.364 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":642,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:28:25.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:28:33.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3440" for this suite. 
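[Editor's note] The ReplicationController adoption test above first creates a standalone pod with a `name` label, then an RC whose selector matches it; the controller adopts the orphan instead of creating a new replica. A minimal sketch under assumed names (the image is illustrative):

```yaml
# Illustrative sketch: an orphan pod followed by an RC whose selector
# matches the pod's label, so the RC adopts it on creation.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2   # assumption
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption            # matches the orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2
```

Because the existing pod already satisfies the selector and replica count, the controller sets itself as the pod's owner reference rather than launching a second pod, which is the behavior the test asserts.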
• [SLOW TEST:8.250 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":39,"skipped":645,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:28:33.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 14 14:28:38.435: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 14 14:28:40.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012118, loc:(*time.Location)(0x747e900)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012118, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012118, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012117, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 14:28:42.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012118, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012118, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012118, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012117, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 14:28:45.602: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that 
should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:28:46.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6254" for this suite. STEP: Destroying namespace "webhook-6254-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.363 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":40,"skipped":666,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:28:46.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-9ffe9612-bed2-44c6-a3dc-007c3cf2b9ce STEP: Creating a pod to test consume configMaps Aug 14 14:28:46.466: INFO: Waiting up to 5m0s for pod "pod-configmaps-83238cea-d039-4020-a5a2-c9feae573a52" in namespace "configmap-5781" to be "Succeeded or Failed" Aug 14 14:28:46.662: INFO: Pod "pod-configmaps-83238cea-d039-4020-a5a2-c9feae573a52": Phase="Pending", Reason="", readiness=false. Elapsed: 195.032358ms Aug 14 14:28:48.739: INFO: Pod "pod-configmaps-83238cea-d039-4020-a5a2-c9feae573a52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272868598s Aug 14 14:28:50.745: INFO: Pod "pod-configmaps-83238cea-d039-4020-a5a2-c9feae573a52": Phase="Running", Reason="", readiness=true. Elapsed: 4.278674405s Aug 14 14:28:52.752: INFO: Pod "pod-configmaps-83238cea-d039-4020-a5a2-c9feae573a52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.285293427s STEP: Saw pod success Aug 14 14:28:52.752: INFO: Pod "pod-configmaps-83238cea-d039-4020-a5a2-c9feae573a52" satisfied condition "Succeeded or Failed" Aug 14 14:28:52.757: INFO: Trying to get logs from node kali-worker pod pod-configmaps-83238cea-d039-4020-a5a2-c9feae573a52 container configmap-volume-test: STEP: delete the pod Aug 14 14:28:52.806: INFO: Waiting for pod pod-configmaps-83238cea-d039-4020-a5a2-c9feae573a52 to disappear Aug 14 14:28:52.828: INFO: Pod pod-configmaps-83238cea-d039-4020-a5a2-c9feae573a52 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:28:52.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5781" for this suite. 
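[Editor's note] "With mappings" in the ConfigMap volume test above refers to the `items` field, which projects a key into a file at a chosen path. A minimal sketch as a non-root pod; names, keys, and the image are illustrative assumptions:

```yaml
# Illustrative sketch: configMap volume with an item mapping
# (key "data-1" rendered at a custom relative path), read as non-root.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # assumption
spec:
  securityContext:
    runAsUser: 1000              # non-root, per the test's intent
  containers:
  - name: configmap-volume-test
    image: busybox               # assumption
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-example   # assumption
      items:
      - key: data-1
        path: path/to/data-2     # the "mapping": key exposed under a new path
```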
• [SLOW TEST:6.551 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":666,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:28:52.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:28:59.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7806" for this suite. STEP: Destroying namespace "nsdeletetest-4497" for this suite. Aug 14 14:28:59.221: INFO: Namespace nsdeletetest-4497 was already deleted STEP: Destroying namespace "nsdeletetest-8549" for this suite. • [SLOW TEST:6.380 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":42,"skipped":682,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:28:59.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:29:11.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-912" for this suite.
• [SLOW TEST:12.121 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":43,"skipped":721,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:29:11.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 14 14:29:11.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e" in namespace "downward-api-5484" to be "Succeeded or Failed"
Aug 14 14:29:11.711: INFO: Pod "downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 135.399453ms
Aug 14 14:29:13.718: INFO: Pod "downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142024136s
Aug 14 14:29:15.726: INFO: Pod "downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149776605s
Aug 14 14:29:17.788: INFO: Pod "downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212419447s
Aug 14 14:29:19.970: INFO: Pod "downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e": Phase="Running", Reason="", readiness=true. Elapsed: 8.394283149s
Aug 14 14:29:22.031: INFO: Pod "downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.454735315s
STEP: Saw pod success
Aug 14 14:29:22.031: INFO: Pod "downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e" satisfied condition "Succeeded or Failed"
Aug 14 14:29:22.340: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e container client-container:
STEP: delete the pod
Aug 14 14:29:22.633: INFO: Waiting for pod downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e to disappear
Aug 14 14:29:22.645: INFO: Pod downwardapi-volume-927b3a8b-60d5-4d6e-998d-db39af103a2e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:29:22.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5484" for this suite.
• [SLOW TEST:11.314 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":734,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:29:22.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 14 14:29:23.244: INFO: namespace kubectl-1917
Aug 14 14:29:23.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1917'
Aug 14 14:29:25.141: INFO: stderr: ""
Aug 14 14:29:25.141: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 14 14:29:26.151: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 14:29:26.151: INFO: Found 0 / 1
Aug 14 14:29:27.151: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 14:29:27.151: INFO: Found 0 / 1
Aug 14 14:29:28.402: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 14:29:28.402: INFO: Found 0 / 1
Aug 14 14:29:29.153: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 14:29:29.154: INFO: Found 1 / 1
Aug 14 14:29:29.154: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Aug 14 14:29:29.228: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 14:29:29.228: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Aug 14 14:29:29.229: INFO: wait on agnhost-master startup in kubectl-1917
Aug 14 14:29:29.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs agnhost-master-fskhs agnhost-master --namespace=kubectl-1917'
Aug 14 14:29:30.507: INFO: stderr: ""
Aug 14 14:29:30.508: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 14 14:29:30.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1917'
Aug 14 14:29:32.576: INFO: stderr: ""
Aug 14 14:29:32.576: INFO: stdout: "service/rm2 exposed\n"
Aug 14 14:29:32.850: INFO: Service rm2 in namespace kubectl-1917 found.
STEP: exposing service
Aug 14 14:29:34.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1917'
Aug 14 14:29:36.466: INFO: stderr: ""
Aug 14 14:29:36.466: INFO: stdout: "service/rm3 exposed\n"
Aug 14 14:29:36.594: INFO: Service rm3 in namespace kubectl-1917 found.
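[Editor's note: for context, the `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` invocation logged above generates a Service roughly equivalent to the manifest below. This is a sketch of the generated object, not output captured from this run; the `app: agnhost` selector is inferred from the "Selector matched 1 pods for map[app:agnhost]" lines, and `clusterIP` is assigned by the API server.]

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2                 # --name
  namespace: kubectl-1917
spec:
  selector:
    app: agnhost            # copied from the RC's pod selector per the log
  ports:
  - protocol: TCP
    port: 1234              # --port: the port the Service serves on
    targetPort: 6379        # --target-port: the container port traffic is forwarded to
```

The second invocation (`expose service rm2 --name=rm3 --port=2345 --target-port=6379`) builds an analogous Service named rm3 from rm2's selector.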
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:29:38.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1917" for this suite.
• [SLOW TEST:15.952 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":45,"skipped":745,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:29:38.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0814 14:29:41.997685 10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 14 14:29:41.999: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:29:41.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5808" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":46,"skipped":751,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:29:42.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 14 14:29:42.146: INFO: Waiting up to 5m0s for pod "pod-9cec14b6-7d3e-4f35-9968-75d10895b219" in namespace "emptydir-4441" to be "Succeeded or Failed"
Aug 14 14:29:42.201: INFO: Pod "pod-9cec14b6-7d3e-4f35-9968-75d10895b219": Phase="Pending", Reason="", readiness=false. Elapsed: 55.021467ms
Aug 14 14:29:44.210: INFO: Pod "pod-9cec14b6-7d3e-4f35-9968-75d10895b219": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063502863s
Aug 14 14:29:46.356: INFO: Pod "pod-9cec14b6-7d3e-4f35-9968-75d10895b219": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209498619s
Aug 14 14:29:48.403: INFO: Pod "pod-9cec14b6-7d3e-4f35-9968-75d10895b219": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.256967296s
STEP: Saw pod success
Aug 14 14:29:48.403: INFO: Pod "pod-9cec14b6-7d3e-4f35-9968-75d10895b219" satisfied condition "Succeeded or Failed"
Aug 14 14:29:48.549: INFO: Trying to get logs from node kali-worker pod pod-9cec14b6-7d3e-4f35-9968-75d10895b219 container test-container:
STEP: delete the pod
Aug 14 14:29:48.706: INFO: Waiting for pod pod-9cec14b6-7d3e-4f35-9968-75d10895b219 to disappear
Aug 14 14:29:48.742: INFO: Pod pod-9cec14b6-7d3e-4f35-9968-75d10895b219 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:29:48.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4441" for this suite.
• [SLOW TEST:6.750 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":761,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:29:48.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 14 14:29:48.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6631'
Aug 14 14:29:50.165: INFO: stderr: ""
Aug 14 14:29:50.165: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Aug 14 14:29:50.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6631'
Aug 14 14:29:55.927: INFO: stderr: ""
Aug 14 14:29:55.927: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:29:55.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6631" for this suite.
• [SLOW TEST:8.135 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":48,"skipped":781,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:29:56.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3818.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3818.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3818.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3818.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3818.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3818.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 14 14:30:12.482: INFO: DNS probes using dns-3818/dns-test-cc2f1aec-2805-4f2f-9e80-0b90b6822c59 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:30:12.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3818" for this suite.
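[Editor's note: the single-line probe script logged above is hard to read. Below is a stubbed, single-pass rendering of the wheezy variant. In the real test, `getent` and `dig` query the cluster's DNS; the stand-in functions here exist only so the control flow can be read, and run, outside a cluster. Record names are copied from the log; the pod IP and `$results` path are placeholders, not values from this run.]

```shell
# One pass of the wheezy probe loop (the real script repeats this every second
# for up to 600 iterations, writing into /results inside the probe pod).
results=$(mktemp -d)

# Stand-ins for the real commands: pretend every name resolves / every query answers.
getent() { echo "10.244.1.7 $2"; }
dig() { echo "dns-3818.pod.cluster.local. 30 IN A 10.244.1.7"; }

# /etc/hosts-style lookups for the headless-service name and the bare pod name.
test -n "$(getent hosts dns-querier-1.dns-test-service.dns-3818.svc.cluster.local)" \
  && echo OK > "$results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3818.svc.cluster.local"
test -n "$(getent hosts dns-querier-1)" \
  && echo OK > "$results/wheezy_hosts@dns-querier-1"

# Pod A-record lookup over UDP and TCP (normally derived from `hostname -i`).
podARec="10-244-1-7.dns-3818.pod.cluster.local"
check="$(dig +notcp +noall +answer +search "$podARec" A)" \
  && test -n "$check" && echo OK > "$results/wheezy_udp@PodARecord"
check="$(dig +tcp +noall +answer +search "$podARec" A)" \
  && test -n "$check" && echo OK > "$results/wheezy_tcp@PodARecord"
ls "$results"
```

The test framework then reads each expected `OK` marker back out of the probe pod; a marker that never appears means the corresponding name failed to resolve.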
• [SLOW TEST:15.664 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":49,"skipped":795,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:30:12.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-b6a0d423-b5ad-45a5-99ec-71977e1f57a8
STEP: Creating a pod to test consume configMaps
Aug 14 14:30:13.455: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bd7d8581-fa91-43d3-b95d-b4801403b858" in namespace "projected-8477" to be "Succeeded or Failed"
Aug 14 14:30:13.468: INFO: Pod "pod-projected-configmaps-bd7d8581-fa91-43d3-b95d-b4801403b858": Phase="Pending", Reason="", readiness=false. Elapsed: 13.43438ms
Aug 14 14:30:15.474: INFO: Pod "pod-projected-configmaps-bd7d8581-fa91-43d3-b95d-b4801403b858": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019628484s
Aug 14 14:30:17.645: INFO: Pod "pod-projected-configmaps-bd7d8581-fa91-43d3-b95d-b4801403b858": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190529243s
Aug 14 14:30:19.914: INFO: Pod "pod-projected-configmaps-bd7d8581-fa91-43d3-b95d-b4801403b858": Phase="Pending", Reason="", readiness=false. Elapsed: 6.459507336s
Aug 14 14:30:21.921: INFO: Pod "pod-projected-configmaps-bd7d8581-fa91-43d3-b95d-b4801403b858": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.465953482s
STEP: Saw pod success
Aug 14 14:30:21.921: INFO: Pod "pod-projected-configmaps-bd7d8581-fa91-43d3-b95d-b4801403b858" satisfied condition "Succeeded or Failed"
Aug 14 14:30:21.926: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-bd7d8581-fa91-43d3-b95d-b4801403b858 container projected-configmap-volume-test:
STEP: delete the pod
Aug 14 14:30:21.969: INFO: Waiting for pod pod-projected-configmaps-bd7d8581-fa91-43d3-b95d-b4801403b858 to disappear
Aug 14 14:30:21.976: INFO: Pod pod-projected-configmaps-bd7d8581-fa91-43d3-b95d-b4801403b858 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:30:21.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8477" for this suite.
• [SLOW TEST:9.416 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":824,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:30:21.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-e779debd-dbfb-4724-b28e-a4459788bb52
STEP: Creating a pod to test consume configMaps
Aug 14 14:30:22.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d347b125-48e6-4baa-97e9-8a8dd71328c1" in namespace "projected-5380" to be "Succeeded or Failed"
Aug 14 14:30:22.103: INFO: Pod "pod-projected-configmaps-d347b125-48e6-4baa-97e9-8a8dd71328c1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.885541ms
Aug 14 14:30:24.110: INFO: Pod "pod-projected-configmaps-d347b125-48e6-4baa-97e9-8a8dd71328c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015522498s
Aug 14 14:30:26.117: INFO: Pod "pod-projected-configmaps-d347b125-48e6-4baa-97e9-8a8dd71328c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021899673s
Aug 14 14:30:28.125: INFO: Pod "pod-projected-configmaps-d347b125-48e6-4baa-97e9-8a8dd71328c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030021131s
STEP: Saw pod success
Aug 14 14:30:28.125: INFO: Pod "pod-projected-configmaps-d347b125-48e6-4baa-97e9-8a8dd71328c1" satisfied condition "Succeeded or Failed"
Aug 14 14:30:28.130: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-d347b125-48e6-4baa-97e9-8a8dd71328c1 container projected-configmap-volume-test:
STEP: delete the pod
Aug 14 14:30:28.180: INFO: Waiting for pod pod-projected-configmaps-d347b125-48e6-4baa-97e9-8a8dd71328c1 to disappear
Aug 14 14:30:28.297: INFO: Pod pod-projected-configmaps-d347b125-48e6-4baa-97e9-8a8dd71328c1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:30:28.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5380" for this suite.
• [SLOW TEST:6.322 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":840,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:30:28.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 14:30:28.463: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:30:29.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8306" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":52,"skipped":857,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:30:29.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 14 14:30:29.696: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7a10af4-08b9-445f-81f3-37cded83e301" in namespace "projected-2434" to be "Succeeded or Failed"
Aug 14 14:30:29.709: INFO: Pod "downwardapi-volume-a7a10af4-08b9-445f-81f3-37cded83e301": Phase="Pending", Reason="", readiness=false. Elapsed: 12.841859ms
Aug 14 14:30:32.001: INFO: Pod "downwardapi-volume-a7a10af4-08b9-445f-81f3-37cded83e301": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304647811s
Aug 14 14:30:34.235: INFO: Pod "downwardapi-volume-a7a10af4-08b9-445f-81f3-37cded83e301": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538807448s
Aug 14 14:30:36.243: INFO: Pod "downwardapi-volume-a7a10af4-08b9-445f-81f3-37cded83e301": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.546595148s
STEP: Saw pod success
Aug 14 14:30:36.243: INFO: Pod "downwardapi-volume-a7a10af4-08b9-445f-81f3-37cded83e301" satisfied condition "Succeeded or Failed"
Aug 14 14:30:36.249: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a7a10af4-08b9-445f-81f3-37cded83e301 container client-container:
STEP: delete the pod
Aug 14 14:30:36.341: INFO: Waiting for pod downwardapi-volume-a7a10af4-08b9-445f-81f3-37cded83e301 to disappear
Aug 14 14:30:36.426: INFO: Pod downwardapi-volume-a7a10af4-08b9-445f-81f3-37cded83e301 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:30:36.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2434" for this suite.
• [SLOW TEST:6.934 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":883,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:30:36.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 14 14:30:36.603: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21fb0510-197c-47c8-a602-a227e30da125" in namespace "projected-7263" to be "Succeeded or Failed"
Aug 14 14:30:36.657: INFO: Pod "downwardapi-volume-21fb0510-197c-47c8-a602-a227e30da125": Phase="Pending", Reason="", readiness=false. Elapsed: 54.386109ms
Aug 14 14:30:38.665: INFO: Pod "downwardapi-volume-21fb0510-197c-47c8-a602-a227e30da125": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061651801s
Aug 14 14:30:40.687: INFO: Pod "downwardapi-volume-21fb0510-197c-47c8-a602-a227e30da125": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084370499s
Aug 14 14:30:42.813: INFO: Pod "downwardapi-volume-21fb0510-197c-47c8-a602-a227e30da125": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.210182218s
STEP: Saw pod success
Aug 14 14:30:42.814: INFO: Pod "downwardapi-volume-21fb0510-197c-47c8-a602-a227e30da125" satisfied condition "Succeeded or Failed"
Aug 14 14:30:42.897: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-21fb0510-197c-47c8-a602-a227e30da125 container client-container:
STEP: delete the pod
Aug 14 14:30:44.484: INFO: Waiting for pod downwardapi-volume-21fb0510-197c-47c8-a602-a227e30da125 to disappear
Aug 14 14:30:44.873: INFO: Pod downwardapi-volume-21fb0510-197c-47c8-a602-a227e30da125 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:30:44.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7263" for this suite.
• [SLOW TEST:8.770 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":887,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:30:45.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug 14 14:30:46.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:32:26.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2093" for this suite.
• [SLOW TEST:101.797 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":55,"skipped":889,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:32:27.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Aug 14 14:32:27.967: INFO: Waiting up to 5m0s for pod "pod-92c4a36b-827d-4a9f-93b2-c254acdc44d1" in namespace "emptydir-9105" to be "Succeeded or Failed" Aug 14 14:32:27.982: INFO: Pod "pod-92c4a36b-827d-4a9f-93b2-c254acdc44d1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.488639ms Aug 14 14:32:30.097: INFO: Pod "pod-92c4a36b-827d-4a9f-93b2-c254acdc44d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130346242s Aug 14 14:32:32.104: INFO: Pod "pod-92c4a36b-827d-4a9f-93b2-c254acdc44d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137390282s Aug 14 14:32:34.113: INFO: Pod "pod-92c4a36b-827d-4a9f-93b2-c254acdc44d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146077237s STEP: Saw pod success Aug 14 14:32:34.113: INFO: Pod "pod-92c4a36b-827d-4a9f-93b2-c254acdc44d1" satisfied condition "Succeeded or Failed" Aug 14 14:32:34.118: INFO: Trying to get logs from node kali-worker2 pod pod-92c4a36b-827d-4a9f-93b2-c254acdc44d1 container test-container: STEP: delete the pod Aug 14 14:32:34.279: INFO: Waiting for pod pod-92c4a36b-827d-4a9f-93b2-c254acdc44d1 to disappear Aug 14 14:32:34.343: INFO: Pod pod-92c4a36b-827d-4a9f-93b2-c254acdc44d1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:32:34.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9105" for this suite. 
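The pod-wait loops above poll roughly every two seconds and report "Elapsed:" values in Go duration syntax (ms or s). A hedged sketch of extracting those durations to measure pod start-up time from the log; `parse_elapsed` is a hypothetical helper, not part of the e2e framework:

```python
import re

def parse_elapsed(entry: str) -> float:
    """Return the 'Elapsed: <duration>' value in seconds.

    Covers the two Go duration forms seen in this log,
    e.g. '15.488639ms' and '6.146077237s'.
    """
    m = re.search(r"Elapsed: ([0-9.]+)(ms|s)", entry)
    if m is None:
        raise ValueError(f"no elapsed duration in {entry!r}")
    value = float(m.group(1))
    return value / 1000.0 if m.group(2) == "ms" else value

# Condensed poll entries for pod-92c4a36b-... from the log above.
polls = [
    'Phase="Pending", Elapsed: 15.488639ms',
    'Phase="Pending", Elapsed: 2.130346242s',
    'Phase="Pending", Elapsed: 4.137390282s',
    'Phase="Succeeded", Elapsed: 6.146077237s',
]
startup_seconds = parse_elapsed(polls[-1])  # time until Succeeded
```

The last entry's elapsed value is the total wait (about 6.1 s here), since each poll reports time since the wait began, not since the previous poll.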
• [SLOW TEST:7.481 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":929,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:32:34.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:33:02.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4967" for this suite. 
• [SLOW TEST:28.216 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":57,"skipped":942,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:33:02.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 14 14:33:03.479: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1922 /api/v1/namespaces/watch-1922/configmaps/e2e-watch-test-watch-closed d367fefa-10cd-40a4-9675-5f7669630e95 9541846 0 2020-08-14 14:33:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-14 14:33:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 
116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 14 14:33:03.481: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1922 /api/v1/namespaces/watch-1922/configmaps/e2e-watch-test-watch-closed d367fefa-10cd-40a4-9675-5f7669630e95 9541847 0 2020-08-14 14:33:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-14 14:33:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 14 14:33:03.549: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1922 /api/v1/namespaces/watch-1922/configmaps/e2e-watch-test-watch-closed d367fefa-10cd-40a4-9675-5f7669630e95 9541848 0 2020-08-14 14:33:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-14 14:33:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 
123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 14 14:33:03.551: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1922 /api/v1/namespaces/watch-1922/configmaps/e2e-watch-test-watch-closed d367fefa-10cd-40a4-9675-5f7669630e95 9541849 0 2020-08-14 14:33:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-14 14:33:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:33:03.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1922" for this suite. 
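The Raw:*[123 34 102 ...] arrays in the ConfigMap dumps above are not binary noise: FieldsV1 stores managedFields as JSON, and the logger prints the []byte as decimal values. Decoding the byte list recovers the JSON; a sketch using the bytes copied from the ADDED event above:

```python
import json

# FieldsV1 Raw bytes from the ADDED event for e2e-watch-test-watch-closed.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123,
       34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34, 46, 34,
       58, 123, 125, 44, 34, 102, 58, 119, 97, 116, 99, 104, 45, 116, 104,
       105, 115, 45, 99, 111, 110, 102, 105, 103, 109, 97, 112, 34, 58,
       123, 125, 125, 125, 125]
decoded = bytes(raw).decode("utf-8")
fields = json.loads(decoded)  # the managedFields entry as a dict
print(decoded)
```

The decoded value shows which fields e2e.test claimed ownership of: the `watch-this-configmap` label that the test uses to select its events.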
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":58,"skipped":961,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:33:03.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-491 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-491 STEP: creating replication controller externalsvc in namespace services-491 I0814 14:33:04.890313 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-491, replica count: 2 I0814 14:33:07.941839 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 14:33:10.942646 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 14:33:13.943354 10 runners.go:190] externalsvc Pods: 2 
out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Aug 14 14:33:14.035: INFO: Creating new exec pod Aug 14 14:33:18.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-491 execpodvx8rd -- /bin/sh -x -c nslookup nodeport-service' Aug 14 14:33:24.067: INFO: stderr: "I0814 14:33:23.914872 400 log.go:172] (0x40007320b0) (0x40008955e0) Create stream\nI0814 14:33:23.918446 400 log.go:172] (0x40007320b0) (0x40008955e0) Stream added, broadcasting: 1\nI0814 14:33:23.929191 400 log.go:172] (0x40007320b0) Reply frame received for 1\nI0814 14:33:23.930347 400 log.go:172] (0x40007320b0) (0x400072e000) Create stream\nI0814 14:33:23.930467 400 log.go:172] (0x40007320b0) (0x400072e000) Stream added, broadcasting: 3\nI0814 14:33:23.933521 400 log.go:172] (0x40007320b0) Reply frame received for 3\nI0814 14:33:23.933781 400 log.go:172] (0x40007320b0) (0x40008d1220) Create stream\nI0814 14:33:23.933857 400 log.go:172] (0x40007320b0) (0x40008d1220) Stream added, broadcasting: 5\nI0814 14:33:23.934950 400 log.go:172] (0x40007320b0) Reply frame received for 5\nI0814 14:33:23.978512 400 log.go:172] (0x40007320b0) Data frame received for 5\nI0814 14:33:23.978771 400 log.go:172] (0x40008d1220) (5) Data frame handling\nI0814 14:33:23.979436 400 log.go:172] (0x40008d1220) (5) Data frame sent\n+ nslookup nodeport-service\nI0814 14:33:24.045210 400 log.go:172] (0x40007320b0) Data frame received for 3\nI0814 14:33:24.045450 400 log.go:172] (0x400072e000) (3) Data frame handling\nI0814 14:33:24.045646 400 log.go:172] (0x400072e000) (3) Data frame sent\nI0814 14:33:24.045833 400 log.go:172] (0x40007320b0) Data frame received for 3\nI0814 14:33:24.045989 400 log.go:172] (0x400072e000) (3) Data frame handling\nI0814 14:33:24.046149 400 log.go:172] (0x400072e000) (3) Data frame sent\nI0814 
14:33:24.046321 400 log.go:172] (0x40007320b0) Data frame received for 3\nI0814 14:33:24.046456 400 log.go:172] (0x400072e000) (3) Data frame handling\nI0814 14:33:24.047150 400 log.go:172] (0x40007320b0) Data frame received for 5\nI0814 14:33:24.047422 400 log.go:172] (0x40008d1220) (5) Data frame handling\nI0814 14:33:24.047940 400 log.go:172] (0x40007320b0) Data frame received for 1\nI0814 14:33:24.048074 400 log.go:172] (0x40008955e0) (1) Data frame handling\nI0814 14:33:24.048213 400 log.go:172] (0x40008955e0) (1) Data frame sent\nI0814 14:33:24.049708 400 log.go:172] (0x40007320b0) (0x40008955e0) Stream removed, broadcasting: 1\nI0814 14:33:24.052807 400 log.go:172] (0x40007320b0) Go away received\nI0814 14:33:24.056060 400 log.go:172] (0x40007320b0) (0x40008955e0) Stream removed, broadcasting: 1\nI0814 14:33:24.056522 400 log.go:172] (0x40007320b0) (0x400072e000) Stream removed, broadcasting: 3\nI0814 14:33:24.056889 400 log.go:172] (0x40007320b0) (0x40008d1220) Stream removed, broadcasting: 5\n" Aug 14 14:33:24.067: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-491.svc.cluster.local\tcanonical name = externalsvc.services-491.svc.cluster.local.\nName:\texternalsvc.services-491.svc.cluster.local\nAddress: 10.105.147.73\n\n" STEP: deleting ReplicationController externalsvc in namespace services-491, will wait for the garbage collector to delete the pods Aug 14 14:33:24.132: INFO: Deleting ReplicationController externalsvc took: 7.978083ms Aug 14 14:33:24.232: INFO: Terminating ReplicationController externalsvc pods took: 100.900587ms Aug 14 14:33:33.866: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:33:34.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-491" for this suite. 
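The nslookup stdout captured above is how the test proves the type change worked: after the switch, nodeport-service resolves as a CNAME to externalsvc's cluster DNS name. A sketch of the same check run against that captured output (pure string parsing of the log, not a live DNS query):

```python
import re

# stdout captured from the `kubectl exec ... nslookup nodeport-service`
# invocation logged above.
stdout = (
    "Server:\t\t10.96.0.10\n"
    "Address:\t10.96.0.10#53\n\n"
    "nodeport-service.services-491.svc.cluster.local\tcanonical name = "
    "externalsvc.services-491.svc.cluster.local.\n"
    "Name:\texternalsvc.services-491.svc.cluster.local\n"
    "Address: 10.105.147.73\n\n"
)
m = re.search(r"^(\S+)\tcanonical name = (\S+)\.$", stdout, re.MULTILINE)
alias, target = m.group(1), m.group(2)
```

An ExternalName service is implemented entirely in cluster DNS as a CNAME record, which is why the test verifies it with a resolver rather than by probing endpoints.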
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:30.652 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":59,"skipped":964,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:33:34.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-4a68d237-d0eb-45d1-b856-d5737f7b6d0b [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:33:34.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6470" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":60,"skipped":995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:33:34.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 14 14:33:35.007: INFO: Waiting up to 5m0s for pod "downward-api-aa837a1c-b589-4e79-b76b-de593a7795c4" in namespace "downward-api-9332" to be "Succeeded or Failed" Aug 14 14:33:35.200: INFO: Pod "downward-api-aa837a1c-b589-4e79-b76b-de593a7795c4": Phase="Pending", Reason="", readiness=false. Elapsed: 192.76101ms Aug 14 14:33:37.208: INFO: Pod "downward-api-aa837a1c-b589-4e79-b76b-de593a7795c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200733583s Aug 14 14:33:39.414: INFO: Pod "downward-api-aa837a1c-b589-4e79-b76b-de593a7795c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.407444894s Aug 14 14:33:41.474: INFO: Pod "downward-api-aa837a1c-b589-4e79-b76b-de593a7795c4": Phase="Running", Reason="", readiness=true. Elapsed: 6.466630096s Aug 14 14:33:44.307: INFO: Pod "downward-api-aa837a1c-b589-4e79-b76b-de593a7795c4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.300177949s STEP: Saw pod success Aug 14 14:33:44.307: INFO: Pod "downward-api-aa837a1c-b589-4e79-b76b-de593a7795c4" satisfied condition "Succeeded or Failed" Aug 14 14:33:44.313: INFO: Trying to get logs from node kali-worker pod downward-api-aa837a1c-b589-4e79-b76b-de593a7795c4 container dapi-container: STEP: delete the pod Aug 14 14:33:44.871: INFO: Waiting for pod downward-api-aa837a1c-b589-4e79-b76b-de593a7795c4 to disappear Aug 14 14:33:44.881: INFO: Pod downward-api-aa837a1c-b589-4e79-b76b-de593a7795c4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:33:44.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9332" for this suite. • [SLOW TEST:10.196 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1050,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:33:44.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] 
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 14 14:33:46.831: INFO: Waiting up to 5m0s for pod "pod-cbaa6f6f-35d6-4343-8ea1-33b2c31a6c47" in namespace "emptydir-7585" to be "Succeeded or Failed" Aug 14 14:33:47.347: INFO: Pod "pod-cbaa6f6f-35d6-4343-8ea1-33b2c31a6c47": Phase="Pending", Reason="", readiness=false. Elapsed: 515.909904ms Aug 14 14:33:49.354: INFO: Pod "pod-cbaa6f6f-35d6-4343-8ea1-33b2c31a6c47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.522898142s Aug 14 14:33:51.659: INFO: Pod "pod-cbaa6f6f-35d6-4343-8ea1-33b2c31a6c47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.827719226s Aug 14 14:33:53.780: INFO: Pod "pod-cbaa6f6f-35d6-4343-8ea1-33b2c31a6c47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.948639782s STEP: Saw pod success Aug 14 14:33:53.781: INFO: Pod "pod-cbaa6f6f-35d6-4343-8ea1-33b2c31a6c47" satisfied condition "Succeeded or Failed" Aug 14 14:33:53.786: INFO: Trying to get logs from node kali-worker pod pod-cbaa6f6f-35d6-4343-8ea1-33b2c31a6c47 container test-container: STEP: delete the pod Aug 14 14:33:54.094: INFO: Waiting for pod pod-cbaa6f6f-35d6-4343-8ea1-33b2c31a6c47 to disappear Aug 14 14:33:54.210: INFO: Pod pod-cbaa6f6f-35d6-4343-8ea1-33b2c31a6c47 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:33:54.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7585" for this suite. 
• [SLOW TEST:9.390 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1054,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:33:54.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:33:56.647: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 14 14:33:56.684: INFO: Number of nodes with available pods: 0 Aug 14 14:33:56.685: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 14 14:33:57.820: INFO: Number of nodes with available pods: 0 Aug 14 14:33:57.820: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:33:59.239: INFO: Number of nodes with available pods: 0 Aug 14 14:33:59.240: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:33:59.863: INFO: Number of nodes with available pods: 0 Aug 14 14:33:59.864: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:00.849: INFO: Number of nodes with available pods: 0 Aug 14 14:34:00.849: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:01.995: INFO: Number of nodes with available pods: 0 Aug 14 14:34:01.995: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:02.827: INFO: Number of nodes with available pods: 0 Aug 14 14:34:02.827: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:03.825: INFO: Number of nodes with available pods: 1 Aug 14 14:34:03.825: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 14 14:34:03.907: INFO: Number of nodes with available pods: 1 Aug 14 14:34:03.907: INFO: Number of running nodes: 0, number of available pods: 1 Aug 14 14:34:04.922: INFO: Number of nodes with available pods: 0 Aug 14 14:34:04.922: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 14 14:34:04.956: INFO: Number of nodes with available pods: 0 Aug 14 14:34:04.956: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:06.049: INFO: Number of nodes with available pods: 0 Aug 14 14:34:06.050: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:06.963: INFO: Number of nodes with available pods: 0 Aug 14 14:34:06.964: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:07.964: INFO: Number of nodes with available pods: 0 
Aug 14 14:34:07.965: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:08.963: INFO: Number of nodes with available pods: 0 Aug 14 14:34:08.963: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:10.538: INFO: Number of nodes with available pods: 0 Aug 14 14:34:10.538: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:11.064: INFO: Number of nodes with available pods: 0 Aug 14 14:34:11.065: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:11.965: INFO: Number of nodes with available pods: 0 Aug 14 14:34:11.965: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:12.988: INFO: Number of nodes with available pods: 1 Aug 14 14:34:12.988: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-579, will wait for the garbage collector to delete the pods Aug 14 14:34:13.062: INFO: Deleting DaemonSet.extensions daemon-set took: 8.463538ms Aug 14 14:34:13.363: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.58056ms Aug 14 14:34:23.468: INFO: Number of nodes with available pods: 0 Aug 14 14:34:23.468: INFO: Number of running nodes: 0, number of available pods: 0 Aug 14 14:34:23.471: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-579/daemonsets","resourceVersion":"9542482"},"items":null} Aug 14 14:34:23.473: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-579/pods","resourceVersion":"9542482"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:34:23.512: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-579" for this suite. • [SLOW TEST:29.536 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":63,"skipped":1074,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:34:23.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:34:24.113: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Aug 14 14:34:24.137: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:24.149: INFO: Number of nodes with available pods: 0 Aug 14 14:34:24.149: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:25.364: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:25.424: INFO: Number of nodes with available pods: 0 Aug 14 14:34:25.424: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:26.159: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:26.166: INFO: Number of nodes with available pods: 0 Aug 14 14:34:26.166: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:27.162: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:27.171: INFO: Number of nodes with available pods: 0 Aug 14 14:34:27.172: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:28.186: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:28.254: INFO: Number of nodes with available pods: 0 Aug 14 14:34:28.254: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:29.164: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:29.169: INFO: Number of nodes with available pods: 1 Aug 14 14:34:29.170: INFO: Node kali-worker 
is running more than one daemon pod Aug 14 14:34:30.163: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:30.171: INFO: Number of nodes with available pods: 2 Aug 14 14:34:30.171: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 14 14:34:30.222: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:30.222: INFO: Wrong image for pod: daemon-set-rmnn7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:30.255: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:31.265: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:31.265: INFO: Wrong image for pod: daemon-set-rmnn7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:31.276: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:32.265: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:32.265: INFO: Wrong image for pod: daemon-set-rmnn7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 14 14:34:32.273: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:33.348: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:33.348: INFO: Wrong image for pod: daemon-set-rmnn7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:33.349: INFO: Pod daemon-set-rmnn7 is not available Aug 14 14:34:33.356: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:34.263: INFO: Pod daemon-set-8gx4x is not available Aug 14 14:34:34.263: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:34.288: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:35.265: INFO: Pod daemon-set-8gx4x is not available Aug 14 14:34:35.265: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:35.275: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:36.264: INFO: Pod daemon-set-8gx4x is not available Aug 14 14:34:36.265: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 14 14:34:36.275: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:37.264: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:37.274: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:38.265: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:38.331: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:39.289: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:39.289: INFO: Pod daemon-set-bdbgt is not available Aug 14 14:34:39.297: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:40.265: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:40.266: INFO: Pod daemon-set-bdbgt is not available Aug 14 14:34:40.297: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:41.264: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 14 14:34:41.264: INFO: Pod daemon-set-bdbgt is not available Aug 14 14:34:41.273: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:42.265: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:42.265: INFO: Pod daemon-set-bdbgt is not available Aug 14 14:34:42.275: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:43.266: INFO: Wrong image for pod: daemon-set-bdbgt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 14 14:34:43.266: INFO: Pod daemon-set-bdbgt is not available Aug 14 14:34:43.277: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:44.530: INFO: Pod daemon-set-cpbzl is not available Aug 14 14:34:44.737: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
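The "Update daemon pods image" phase above replaces pods one at a time (first daemon-set-rmnn7, then daemon-set-bdbgt), which is the RollingUpdate strategy at work. A minimal sketch of the strategy and the image patch involved, assuming the apps/v1 default of `maxUnavailable: 1` and a hypothetical container name; the target image is the "Expected:" image from the log.

```python
# RollingUpdate replaces daemon pods with at most maxUnavailable pods down at
# once; 1 is the apps/v1 default, matching the one-at-a-time churn in the log.
update_strategy = {"type": "RollingUpdate", "rollingUpdate": {"maxUnavailable": 1}}

# Strategic-merge-style patch updating the pod template's image.
# Container name "app" is hypothetical; the image is the log's expected image.
image_patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "app",
                     "image": "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12"}
                ]
            }
        }
    }
}
```

Because the template hash changes, the controller deletes an old pod, waits for its replacement to become available, then moves on, which is exactly the "Pod ... is not available" sequence in the log.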
Aug 14 14:34:44.765: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:44.776: INFO: Number of nodes with available pods: 1 Aug 14 14:34:44.776: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:45.790: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:45.798: INFO: Number of nodes with available pods: 1 Aug 14 14:34:45.798: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:46.887: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:46.925: INFO: Number of nodes with available pods: 1 Aug 14 14:34:46.925: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:47.790: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:47.797: INFO: Number of nodes with available pods: 1 Aug 14 14:34:47.797: INFO: Node kali-worker is running more than one daemon pod Aug 14 14:34:48.809: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 14:34:48.816: INFO: Number of nodes with available pods: 2 Aug 14 14:34:48.816: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1497, will wait for the garbage collector to delete the 
pods Aug 14 14:34:48.935: INFO: Deleting DaemonSet.extensions daemon-set took: 37.289788ms Aug 14 14:34:49.236: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.944173ms Aug 14 14:35:03.441: INFO: Number of nodes with available pods: 0 Aug 14 14:35:03.441: INFO: Number of running nodes: 0, number of available pods: 0 Aug 14 14:35:03.446: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1497/daemonsets","resourceVersion":"9542726"},"items":null} Aug 14 14:35:03.450: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1497/pods","resourceVersion":"9542726"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:35:03.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1497" for this suite. • [SLOW TEST:39.672 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":64,"skipped":1108,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:35:03.508: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 14 14:35:03.642: INFO: Waiting up to 5m0s for pod "downward-api-bd795e67-bd39-4e8f-91ee-f8ea62d32554" in namespace "downward-api-2627" to be "Succeeded or Failed" Aug 14 14:35:03.652: INFO: Pod "downward-api-bd795e67-bd39-4e8f-91ee-f8ea62d32554": Phase="Pending", Reason="", readiness=false. Elapsed: 9.768663ms Aug 14 14:35:05.811: INFO: Pod "downward-api-bd795e67-bd39-4e8f-91ee-f8ea62d32554": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168547264s Aug 14 14:35:07.819: INFO: Pod "downward-api-bd795e67-bd39-4e8f-91ee-f8ea62d32554": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1764518s STEP: Saw pod success Aug 14 14:35:07.819: INFO: Pod "downward-api-bd795e67-bd39-4e8f-91ee-f8ea62d32554" satisfied condition "Succeeded or Failed" Aug 14 14:35:07.824: INFO: Trying to get logs from node kali-worker pod downward-api-bd795e67-bd39-4e8f-91ee-f8ea62d32554 container dapi-container: STEP: delete the pod Aug 14 14:35:07.847: INFO: Waiting for pod downward-api-bd795e67-bd39-4e8f-91ee-f8ea62d32554 to disappear Aug 14 14:35:07.935: INFO: Pod downward-api-bd795e67-bd39-4e8f-91ee-f8ea62d32554 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:35:07.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2627" for this suite. 
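The Downward API test above exposes the node's IP to the container through an environment variable. A hypothetical sketch of the pod spec it creates: the container name `dapi-container` appears in the log; the image, command, and env-var name are illustrative. The key mechanism is a `valueFrom.fieldRef` pointing at `status.hostIP`.

```python
# Hypothetical sketch of a downward-API pod exposing the host IP as an env var.
# Only the container name comes from the log; the rest is illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-example"},  # illustrative name
    "spec": {
        "restartPolicy": "Never",  # the test waits for "Succeeded or Failed"
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",            # illustrative image
            "command": ["sh", "-c", "env"],  # print env so logs can be checked
            "env": [{
                "name": "HOST_IP",
                # status.hostIP is the downward-API field for the node's IP.
                "valueFrom": {"fieldRef": {"fieldPath": "status.hostIP"}},
            }],
        }],
    },
}
```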
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1114,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:35:07.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-98010cbd-ca0e-4218-aa0c-a6709f3e59cf STEP: Creating a pod to test consume configMaps Aug 14 14:35:08.076: INFO: Waiting up to 5m0s for pod "pod-configmaps-90ec2464-abc8-4888-b1dd-19a3ded7d0b9" in namespace "configmap-8570" to be "Succeeded or Failed" Aug 14 14:35:08.113: INFO: Pod "pod-configmaps-90ec2464-abc8-4888-b1dd-19a3ded7d0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.907889ms Aug 14 14:35:10.121: INFO: Pod "pod-configmaps-90ec2464-abc8-4888-b1dd-19a3ded7d0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044534538s Aug 14 14:35:12.130: INFO: Pod "pod-configmaps-90ec2464-abc8-4888-b1dd-19a3ded7d0b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.05323274s STEP: Saw pod success Aug 14 14:35:12.130: INFO: Pod "pod-configmaps-90ec2464-abc8-4888-b1dd-19a3ded7d0b9" satisfied condition "Succeeded or Failed" Aug 14 14:35:12.136: INFO: Trying to get logs from node kali-worker pod pod-configmaps-90ec2464-abc8-4888-b1dd-19a3ded7d0b9 container configmap-volume-test: STEP: delete the pod Aug 14 14:35:12.194: INFO: Waiting for pod pod-configmaps-90ec2464-abc8-4888-b1dd-19a3ded7d0b9 to disappear Aug 14 14:35:12.199: INFO: Pod pod-configmaps-90ec2464-abc8-4888-b1dd-19a3ded7d0b9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:35:12.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8570" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:35:12.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Aug 14 14:35:16.876: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7187 
pod-service-account-fa6de517-095d-4b54-ab1b-299b28ab3128 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Aug 14 14:35:18.330: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7187 pod-service-account-fa6de517-095d-4b54-ab1b-299b28ab3128 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Aug 14 14:35:19.762: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7187 pod-service-account-fa6de517-095d-4b54-ab1b-299b28ab3128 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:35:21.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7187" for this suite. • [SLOW TEST:9.101 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":67,"skipped":1149,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:35:21.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 14 14:35:23.709: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 14 14:35:25.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012523, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012523, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012523, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012523, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 14:35:27.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012523, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012523, loc:(*time.Location)(0x747e900)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012523, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012523, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 14:35:30.776: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:35:30.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:35:32.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8550" for this suite. STEP: Destroying namespace "webhook-8550-markers" for this suite. 
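The step "Registering the custom resource webhook via the AdmissionRegistration API" above installs a validating webhook that covers CREATE, UPDATE, and DELETE on the test's custom resource. A hypothetical sketch of the configuration's shape: the service name (`e2e-test-webhook`) and namespace (`webhook-8550`) appear in the log; the webhook name, CRD group, resource, and path are illustrative.

```python
# Hypothetical ValidatingWebhookConfiguration of the shape this test registers.
# Service name/namespace come from the log; everything else is illustrative.
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "deny-custom-resource"},  # illustrative
    "webhooks": [{
        "name": "deny-custom-resource.example.com",  # illustrative
        "rules": [{
            "apiGroups": ["webhook.example.com"],  # illustrative CRD group
            "apiVersions": ["*"],
            # The log shows creation, update, and deletion all being denied.
            "operations": ["CREATE", "UPDATE", "DELETE"],
            "resources": ["e2e-test-crds"],  # illustrative resource plural
        }],
        "clientConfig": {"service": {
            "name": "e2e-test-webhook",
            "namespace": "webhook-8550",
            "path": "/custom-resource",  # illustrative
        }},
        # Required fields in admissionregistration.k8s.io/v1:
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
        "failurePolicy": "Fail",
    }],
}
```

With this in place, the apiserver consults the webhook service before persisting any matching operation, which is why the disallowed create/update/delete attempts in the log are rejected until the offending data is removed.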
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:11.066 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":68,"skipped":1153,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:35:32.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:35:37.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2945" for this suite.
• [SLOW TEST:5.465 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":69,"skipped":1172,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:35:37.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 14 14:35:38.082: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 14 14:35:38.136: INFO: Waiting for terminating namespaces to be deleted...
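The Watchers test above verifies that watches started from different resource versions all observe events in the same relative order. A minimal sketch of that invariant, using plain lists in place of real watch streams (names and resource-version numbers are illustrative):

```python
# Illustrative event stream: (event type, resourceVersion) pairs, in the order
# the background goroutine in the test produced them.
events = [("ADDED", 101), ("MODIFIED", 102), ("MODIFIED", 103), ("DELETED", 104)]

def watch_from(rv):
    """Replay events with resourceVersion greater than rv, in original order,
    like a watch started at resourceVersion=rv."""
    return [e for e in events if e[1] > rv]

# Invariant the test checks: every watch, wherever it starts, sees a suffix of
# the same sequence -- never a reordering.
for start in (100, 101, 102):
    replay = watch_from(start)
    assert replay == events[len(events) - len(replay):]
```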
Aug 14 14:35:38.144: INFO: Logging pods the kubelet thinks is on node kali-worker before test Aug 14 14:35:38.170: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.170: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 14 14:35:38.170: INFO: rally-1cd7c17f-v3nmd8vz-589d46bdb9-zmgmx from c-rally-1cd7c17f-mfngbtu6 started at 2020-08-13 20:24:45 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.170: INFO: Container rally-1cd7c17f-v3nmd8vz ready: true, restart count 0 Aug 14 14:35:38.170: INFO: rally-274c9b85-l73e7cqd from c-rally-274c9b85-kujwveos started at 2020-08-14 14:34:30 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.170: INFO: Container rally-274c9b85-l73e7cqd ready: true, restart count 0 Aug 14 14:35:38.170: INFO: rally-466602a1-db17uwyh-z26cp from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:59:13 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.170: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0 Aug 14 14:35:38.170: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.170: INFO: Container kube-proxy ready: true, restart count 0 Aug 14 14:35:38.170: INFO: rally-466602a1-db17uwyh-6xgdb from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:51:36 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.170: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0 Aug 14 14:35:38.170: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.171: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 14 14:35:38.171: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.171: INFO: Container kindnet-cni 
ready: true, restart count 1 Aug 14 14:35:38.171: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Aug 14 14:35:38.203: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.204: INFO: Container kube-proxy ready: true, restart count 0 Aug 14 14:35:38.204: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.204: INFO: Container kindnet-cni ready: true, restart count 1 Aug 14 14:35:38.204: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.204: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 14 14:35:38.204: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.204: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 14 14:35:38.204: INFO: rally-7104017d-j5l4uv4e-0 from c-rally-7104017d-2oejvhl7 started at 2020-08-11 18:51:39 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.204: INFO: Container rally-7104017d-j5l4uv4e ready: true, restart count 1 Aug 14 14:35:38.204: INFO: rally-1cd7c17f-v3nmd8vz-589d46bdb9-h9wtg from c-rally-1cd7c17f-mfngbtu6 started at 2020-08-13 20:24:47 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.204: INFO: Container rally-1cd7c17f-v3nmd8vz ready: true, restart count 0 Aug 14 14:35:38.204: INFO: rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh from c-rally-6c5ea4be-pyo3sp3v started at 2020-08-11 18:16:03 +0000 UTC (1 container statuses recorded) Aug 14 14:35:38.204: INFO: Container rally-6c5ea4be-96nyoha6 ready: true, restart count 72 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-69bd85af-08e3-4adc-9d4f-452cee658e66 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-69bd85af-08e3-4adc-9d4f-452cee658e66 off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-69bd85af-08e3-4adc-9d4f-452cee658e66 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:40:48.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7572" for this suite. 
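The scheduling predicate exercised above treats hostIP 0.0.0.0 as binding all interfaces, so pod5 (hostIP 127.0.0.1, hostPort 54322) cannot be scheduled onto the node already running pod4 (hostIP 0.0.0.0, same port and protocol). A minimal sketch of that conflict rule (the function name is mine, not the scheduler's; the port comes from the log):

```python
def host_ports_conflict(a, b):
    """True if two (hostIP, hostPort, protocol) requests cannot coexist on one
    node. 0.0.0.0 binds all interfaces, so it clashes with any hostIP on the
    same port/protocol -- the case this predicates test exercises."""
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

pod4 = ("0.0.0.0", 54322, "TCP")    # scheduled first
pod5 = ("127.0.0.1", 54322, "TCP")  # expected NOT to schedule on pod4's node
assert host_ports_conflict(pod4, pod5)
# Distinct concrete hostIPs on the same port do not conflict:
assert not host_ports_conflict(("127.0.0.1", 54322, "TCP"), ("10.0.0.1", 54322, "TCP"))
```

This is also why the test pins both pods to one node via a random node label: the conflict only exists when they compete for the same host.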
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:310.568 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":70,"skipped":1202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:40:48.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Aug 14 14:40:48.805: INFO: Waiting up to 5m0s for pod "var-expansion-3999ddda-8168-4ad2-8bce-6f341cc64a65" in namespace "var-expansion-7761" to be "Succeeded or Failed"
Aug 14 14:40:48.824: INFO: Pod "var-expansion-3999ddda-8168-4ad2-8bce-6f341cc64a65": Phase="Pending", Reason="", readiness=false. Elapsed: 18.921853ms
Aug 14 14:40:50.830: INFO: Pod "var-expansion-3999ddda-8168-4ad2-8bce-6f341cc64a65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024672867s
Aug 14 14:40:52.836: INFO: Pod "var-expansion-3999ddda-8168-4ad2-8bce-6f341cc64a65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030765161s
Aug 14 14:40:55.122: INFO: Pod "var-expansion-3999ddda-8168-4ad2-8bce-6f341cc64a65": Phase="Running", Reason="", readiness=true. Elapsed: 6.316242649s
Aug 14 14:40:57.394: INFO: Pod "var-expansion-3999ddda-8168-4ad2-8bce-6f341cc64a65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.58895118s
STEP: Saw pod success
Aug 14 14:40:57.395: INFO: Pod "var-expansion-3999ddda-8168-4ad2-8bce-6f341cc64a65" satisfied condition "Succeeded or Failed"
Aug 14 14:40:57.456: INFO: Trying to get logs from node kali-worker pod var-expansion-3999ddda-8168-4ad2-8bce-6f341cc64a65 container dapi-container:
STEP: delete the pod
Aug 14 14:40:58.317: INFO: Waiting for pod var-expansion-3999ddda-8168-4ad2-8bce-6f341cc64a65 to disappear
Aug 14 14:40:58.340: INFO: Pod var-expansion-3999ddda-8168-4ad2-8bce-6f341cc64a65 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:40:58.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7761" for this suite.
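The substitution being verified can be sketched with a minimal pod manifest (a sketch; pod name, image, and env values are assumptions — the log fixes only the container name dapi-container). The kubelet expands $(VAR) references in command and args from the container's own env before the process starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29         # illustrative image
    command: ["sh", "-c"]
    # $(TEST_VAR) is replaced by Kubernetes, not by the shell, so the
    # container's stdout contains the literal value "test-value".
    args: ["echo $(TEST_VAR)"]
    env:
    - name: TEST_VAR
      value: "test-value"
```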
• [SLOW TEST:10.223 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1227,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:40:58.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 14 14:41:01.609: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 14 14:41:03.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012861, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012861, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012861, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012861, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 14:41:05.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012861, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012861, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012861, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733012861, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 14 14:41:08.678: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:41:08.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9292" for this suite.
STEP: Destroying namespace "webhook-9292-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.331 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":72,"skipped":1238,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:41:08.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-3ef56f21-b9a5-42f9-9b50-75040f189ad3 in namespace container-probe-8033
Aug 14 14:41:13.077: INFO: Started pod busybox-3ef56f21-b9a5-42f9-9b50-75040f189ad3 in namespace container-probe-8033
STEP: checking the pod's current state and verifying that restartCount is present
Aug 14 14:41:13.082: INFO: Initial restart count of pod busybox-3ef56f21-b9a5-42f9-9b50-75040f189ad3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:45:14.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8033" for this suite.
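A sketch of the probed pod (name and image are assumptions; the probe command comes from the spec title): /tmp/health is created at startup and never removed, so the exec probe keeps succeeding for the roughly four-minute observation window and restartCount stays 0.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo    # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox:1.29          # illustrative image
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exits 0 while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
```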
• [SLOW TEST:245.038 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1252,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:45:14.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 14 14:45:14.188: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 14 14:45:14.215: INFO: Waiting for terminating namespaces to be deleted...
Aug 14 14:45:14.221: INFO: Logging pods the kubelet thinks is on node kali-worker before test
Aug 14 14:45:14.255: INFO: rally-466602a1-db17uwyh-z26cp from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:59:13 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.255: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0
Aug 14 14:45:14.255: INFO: rally-466602a1-db17uwyh-6xgdb from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:51:36 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.255: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0
Aug 14 14:45:14.255: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.255: INFO: Container kube-proxy ready: true, restart count 0
Aug 14 14:45:14.255: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.255: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 14 14:45:14.255: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.256: INFO: Container kindnet-cni ready: true, restart count 1
Aug 14 14:45:14.256: INFO: rally-1cd7c17f-v3nmd8vz-589d46bdb9-zmgmx from c-rally-1cd7c17f-mfngbtu6 started at 2020-08-13 20:24:45 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.256: INFO: Container rally-1cd7c17f-v3nmd8vz ready: true, restart count 0
Aug 14 14:45:14.256: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.256: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 14 14:45:14.256: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test
Aug 14 14:45:14.289: INFO: rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh from c-rally-6c5ea4be-pyo3sp3v started at 2020-08-11 18:16:03 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.289: INFO: Container rally-6c5ea4be-96nyoha6 ready: true, restart count 73
Aug 14 14:45:14.289: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.289: INFO: Container kube-proxy ready: true, restart count 0
Aug 14 14:45:14.289: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.289: INFO: Container kindnet-cni ready: true, restart count 1
Aug 14 14:45:14.289: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.289: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 14 14:45:14.289: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.289: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 14 14:45:14.289: INFO: rally-7104017d-j5l4uv4e-0 from c-rally-7104017d-2oejvhl7 started at 2020-08-11 18:51:39 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.289: INFO: Container rally-7104017d-j5l4uv4e ready: true, restart count 1
Aug 14 14:45:14.289: INFO: rally-1cd7c17f-v3nmd8vz-589d46bdb9-h9wtg from c-rally-1cd7c17f-mfngbtu6 started at 2020-08-13 20:24:47 +0000 UTC (1 container statuses recorded)
Aug 14 14:45:14.289: INFO: Container rally-1cd7c17f-v3nmd8vz ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0df2e3eb-834b-40ff-ba82-a351fed142e0 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-0df2e3eb-834b-40ff-ba82-a351fed142e0 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0df2e3eb-834b-40ff-ba82-a351fed142e0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:45:26.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6934" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:12.499 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":74,"skipped":1255,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:45:26.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:45:44.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4584" for this suite.
• [SLOW TEST:18.078 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":75,"skipped":1267,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:45:44.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 14 14:45:44.942: INFO: Waiting up to 5m0s for pod "pod-fa2529c1-420f-4ca2-b02f-a49545e90ade" in namespace "emptydir-8721" to be "Succeeded or Failed"
Aug 14 14:45:44.999: INFO: Pod "pod-fa2529c1-420f-4ca2-b02f-a49545e90ade": Phase="Pending", Reason="", readiness=false. Elapsed: 57.079292ms
Aug 14 14:45:48.319: INFO: Pod "pod-fa2529c1-420f-4ca2-b02f-a49545e90ade": Phase="Pending", Reason="", readiness=false. Elapsed: 3.376953905s
Aug 14 14:45:50.576: INFO: Pod "pod-fa2529c1-420f-4ca2-b02f-a49545e90ade": Phase="Pending", Reason="", readiness=false. Elapsed: 5.633526212s
Aug 14 14:45:52.736: INFO: Pod "pod-fa2529c1-420f-4ca2-b02f-a49545e90ade": Phase="Running", Reason="", readiness=true. Elapsed: 7.79371034s
Aug 14 14:45:54.744: INFO: Pod "pod-fa2529c1-420f-4ca2-b02f-a49545e90ade": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.80162863s
STEP: Saw pod success
Aug 14 14:45:54.744: INFO: Pod "pod-fa2529c1-420f-4ca2-b02f-a49545e90ade" satisfied condition "Succeeded or Failed"
Aug 14 14:45:54.750: INFO: Trying to get logs from node kali-worker pod pod-fa2529c1-420f-4ca2-b02f-a49545e90ade container test-container:
STEP: delete the pod
Aug 14 14:45:55.588: INFO: Waiting for pod pod-fa2529c1-420f-4ca2-b02f-a49545e90ade to disappear
Aug 14 14:45:56.015: INFO: Pod pod-fa2529c1-420f-4ca2-b02f-a49545e90ade no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:45:56.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8721" for this suite.
• [SLOW TEST:11.543 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1296,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
  listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:45:56.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 14:45:56.217: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:46:04.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-576" for this suite.
• [SLOW TEST:8.863 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":77,"skipped":1296,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:46:05.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:46:24.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2486" for this suite.
• [SLOW TEST:19.415 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":78,"skipped":1305,"failed":0}
SSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:46:24.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-7792
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7792 to expose endpoints map[]
Aug 14 14:46:24.669: INFO: successfully validated that service multi-endpoint-test in namespace services-7792 exposes endpoints map[] (34.82793ms elapsed)
STEP: Creating pod pod1 in namespace services-7792
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7792 to expose endpoints map[pod1:[100]]
Aug 14 14:46:29.302: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.606897946s elapsed, will retry)
Aug 14 14:46:32.859: INFO: successfully validated that service multi-endpoint-test in namespace services-7792 exposes endpoints map[pod1:[100]] (8.164189954s elapsed)
STEP: Creating pod pod2 in namespace services-7792
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7792 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 14 14:46:37.784: INFO: Unexpected endpoints: found map[5876c9fa-ea40-4338-94ad-4281835eea48:[100]], expected map[pod1:[100] pod2:[101]] (4.686217344s elapsed, will retry)
Aug 14 14:46:38.796: INFO: successfully validated that service multi-endpoint-test in namespace services-7792 exposes endpoints map[pod1:[100] pod2:[101]] (5.699072766s elapsed)
STEP: Deleting pod pod1 in namespace services-7792
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7792 to expose endpoints map[pod2:[101]]
Aug 14 14:46:39.049: INFO: successfully validated that service multi-endpoint-test in namespace services-7792 exposes endpoints map[pod2:[101]] (246.731232ms elapsed)
STEP: Deleting pod pod2 in namespace services-7792
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7792 to expose endpoints map[]
Aug 14 14:46:39.754: INFO: successfully validated that service multi-endpoint-test in namespace services-7792 exposes endpoints map[] (698.054622ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:46:40.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7792" for this suite.
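The multiport service validated above can be sketched as follows (only the service name and the container ports 100/101 come from the log's endpoint maps; the selector label, port names, and service port numbers are assumptions). Each named service port forwards to a different container port, and the endpoints controller publishes only the ports a Ready pod actually serves:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test   # hypothetical selector label
  ports:
  - name: portname1            # hypothetical port name
    port: 80
    targetPort: 100            # served by pod1, per map[pod1:[100]] in the log
  - name: portname2
    port: 81
    targetPort: 101            # served by pod2, per map[pod2:[101]] in the log
```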
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:16.702 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":79,"skipped":1310,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:46:41.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Aug 14 14:46:42.073: INFO: Waiting up to 5m0s for pod "var-expansion-590db1dd-e09c-42fe-a228-4c8032bf59d3" in namespace "var-expansion-9193" to be "Succeeded or Failed"
Aug 14 14:46:42.429: INFO: Pod "var-expansion-590db1dd-e09c-42fe-a228-4c8032bf59d3": Phase="Pending", Reason="", readiness=false. Elapsed: 356.287195ms
Aug 14 14:46:44.720: INFO: Pod "var-expansion-590db1dd-e09c-42fe-a228-4c8032bf59d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.646601927s
Aug 14 14:46:47.235: INFO: Pod "var-expansion-590db1dd-e09c-42fe-a228-4c8032bf59d3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.162128016s
Aug 14 14:46:49.253: INFO: Pod "var-expansion-590db1dd-e09c-42fe-a228-4c8032bf59d3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.179571951s
Aug 14 14:46:51.258: INFO: Pod "var-expansion-590db1dd-e09c-42fe-a228-4c8032bf59d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.185235712s
STEP: Saw pod success
Aug 14 14:46:51.259: INFO: Pod "var-expansion-590db1dd-e09c-42fe-a228-4c8032bf59d3" satisfied condition "Succeeded or Failed"
Aug 14 14:46:51.262: INFO: Trying to get logs from node kali-worker pod var-expansion-590db1dd-e09c-42fe-a228-4c8032bf59d3 container dapi-container:
STEP: delete the pod
Aug 14 14:46:51.835: INFO: Waiting for pod var-expansion-590db1dd-e09c-42fe-a228-4c8032bf59d3 to disappear
Aug 14 14:46:52.163: INFO: Pod var-expansion-590db1dd-e09c-42fe-a228-4c8032bf59d3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:46:52.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9193" for this suite.
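The composition being tested can be sketched as (pod name, image, variable names, and values are assumptions; the log fixes only the container name dapi-container): an env entry may reference previously defined entries with $(NAME), and the kubelet resolves the references before starting the container.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-compose-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29              # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR                   # composed from the two entries above;
      value: "$(FOO);;$(BAR)"        # resolves to foo-value;;bar-value
```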
• [SLOW TEST:11.038 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:46:52.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 14:46:52.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43a440e4-8292-4293-8afc-8705b6d40eb2" in namespace "projected-1989" to be "Succeeded or Failed" Aug 14 14:46:52.592: INFO: Pod "downwardapi-volume-43a440e4-8292-4293-8afc-8705b6d40eb2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 55.421637ms Aug 14 14:46:54.597: INFO: Pod "downwardapi-volume-43a440e4-8292-4293-8afc-8705b6d40eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060824267s Aug 14 14:46:56.782: INFO: Pod "downwardapi-volume-43a440e4-8292-4293-8afc-8705b6d40eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24542645s Aug 14 14:46:58.788: INFO: Pod "downwardapi-volume-43a440e4-8292-4293-8afc-8705b6d40eb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25102178s STEP: Saw pod success Aug 14 14:46:58.788: INFO: Pod "downwardapi-volume-43a440e4-8292-4293-8afc-8705b6d40eb2" satisfied condition "Succeeded or Failed" Aug 14 14:46:58.793: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-43a440e4-8292-4293-8afc-8705b6d40eb2 container client-container: STEP: delete the pod Aug 14 14:46:58.830: INFO: Waiting for pod downwardapi-volume-43a440e4-8292-4293-8afc-8705b6d40eb2 to disappear Aug 14 14:46:58.898: INFO: Pod downwardapi-volume-43a440e4-8292-4293-8afc-8705b6d40eb2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:46:58.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1989" for this suite. 
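The Projected downwardAPI test above checks that a container's CPU request can be exposed as a file through a projected volume. A minimal sketch under the same idea (names and values are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              # divisor scales the reported value; with 1m, a 250m
              # request is written to the file as "250".
              divisor: 1m
```

The test then reads the container's logs (as seen in the "Trying to get logs" step) and asserts the file content matches the declared request.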
• [SLOW TEST:6.807 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1379,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:46:58.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 14 14:47:03.616: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 14 14:47:06.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 14:47:08.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 14:47:10.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013223, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 14:47:13.560: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:47:13.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:47:14.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3214" for this suite. 
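The conversion-webhook test above registers a CRD whose versions are converted by the deployed webhook service, then lists a mixed set of v1 and v2 custom resources through each version endpoint. A sketch of the conversion stanza involved, assuming apiextensions.k8s.io/v1 (group, names, service coordinates, and path are illustrative, not from the log):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.example.com   # hypothetical CRD
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: default          # illustrative
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert           # illustrative
        # caBundle: <base64 CA cert used to verify the webhook's serving cert>
```

This is why the test first sets up a server cert and waits for the webhook deployment and service endpoints before creating any CRs: every cross-version list triggers a call to this service.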
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:16.054 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":82,"skipped":1396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:47:15.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Aug 14 14:47:16.150: INFO: created pod pod-service-account-defaultsa Aug 14 14:47:16.151: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 14 14:47:16.205: INFO: created pod pod-service-account-mountsa Aug 14 14:47:16.205: INFO: pod pod-service-account-mountsa service 
account token volume mount: true Aug 14 14:47:16.604: INFO: created pod pod-service-account-nomountsa Aug 14 14:47:16.604: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 14 14:47:16.787: INFO: created pod pod-service-account-defaultsa-mountspec Aug 14 14:47:16.787: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 14 14:47:16.865: INFO: created pod pod-service-account-mountsa-mountspec Aug 14 14:47:16.865: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 14 14:47:16.926: INFO: created pod pod-service-account-nomountsa-mountspec Aug 14 14:47:16.926: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 14 14:47:16.976: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 14 14:47:16.976: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 14 14:47:17.068: INFO: created pod pod-service-account-mountsa-nomountspec Aug 14 14:47:17.068: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 14 14:47:17.092: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 14 14:47:17.092: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:47:17.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5943" for this suite. 
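The nine pods above cover the combinations of service-account-level and pod-level token automount settings; the log confirms the pod-level field wins when both are set, and the service account's setting applies when the pod spec leaves it unset. A minimal sketch of the opt-out (names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa   # hypothetical name
# Service-account-level default: pods using this SA get no token volume
# unless their pod spec overrides it.
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa-demo   # hypothetical name
spec:
  serviceAccountName: nomount-sa
  # Pod-level setting takes precedence over the service account's.
  automountServiceAccountToken: false
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```

With both fields false, no `kube-api-access-*` token volume is projected into the container, matching the "token volume mount: false" lines in the log.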
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":83,"skipped":1463,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:47:17.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Aug 14 14:47:19.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-6300 -- logs-generator --log-lines-total 100 --run-duration 20s' Aug 14 14:47:47.769: INFO: stderr: "" Aug 14 14:47:47.769: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Aug 14 14:47:47.770: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Aug 14 14:47:47.771: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6300" to be "running and ready, or succeeded" Aug 14 14:47:49.044: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.272644509s Aug 14 14:47:51.049: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.278247909s Aug 14 14:47:53.270: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.498988872s Aug 14 14:47:55.277: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 7.505670202s Aug 14 14:47:55.277: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Aug 14 14:47:55.278: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Aug 14 14:47:55.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6300' Aug 14 14:47:57.529: INFO: stderr: "" Aug 14 14:47:57.529: INFO: stdout: "I0814 14:47:53.310525 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/d8pb 531\nI0814 14:47:53.510671 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/mlw 222\nI0814 14:47:53.711506 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/zcz 359\nI0814 14:47:53.910750 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/tcr 457\nI0814 14:47:54.110727 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/j4v7 492\nI0814 14:47:54.310698 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/jj8 233\nI0814 14:47:54.510753 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/c8v 488\nI0814 14:47:54.710810 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/zgq6 575\nI0814 14:47:54.910712 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/sx4 473\nI0814 14:47:55.110692 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/82z 283\nI0814 14:47:55.310723 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/8tln 487\nI0814 14:47:55.510732 1 logs_generator.go:76] 11 PUT 
/api/v1/namespaces/ns/pods/qwxs 370\nI0814 14:47:55.710712 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/s94g 558\nI0814 14:47:55.910677 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/wgmv 240\nI0814 14:47:56.110679 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/t97 371\nI0814 14:47:56.311158 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/b6b 331\nI0814 14:47:56.510677 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/m564 567\nI0814 14:47:56.710685 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/9xjd 295\nI0814 14:47:56.910718 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/6jx 492\nI0814 14:47:57.110719 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/7lc 322\nI0814 14:47:57.310729 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/8469 278\nI0814 14:47:57.510788 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/hpmx 310\n" STEP: limiting log lines Aug 14 14:47:57.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6300 --tail=1' Aug 14 14:47:58.863: INFO: stderr: "" Aug 14 14:47:58.863: INFO: stdout: "I0814 14:47:58.710686 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/68mr 345\n" Aug 14 14:47:58.864: INFO: got output "I0814 14:47:58.710686 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/68mr 345\n" STEP: limiting log bytes Aug 14 14:47:58.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6300 --limit-bytes=1' Aug 14 14:48:00.298: INFO: stderr: "" Aug 14 14:48:00.298: INFO: stdout: "I" Aug 14 14:48:00.298: INFO: got output "I" STEP: exposing timestamps Aug 14 14:48:00.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 
--kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6300 --tail=1 --timestamps' Aug 14 14:48:01.575: INFO: stderr: "" Aug 14 14:48:01.576: INFO: stdout: "2020-08-14T14:48:01.510854481Z I0814 14:48:01.510669 1 logs_generator.go:76] 41 POST /api/v1/namespaces/default/pods/gvf 550\n" Aug 14 14:48:01.576: INFO: got output "2020-08-14T14:48:01.510854481Z I0814 14:48:01.510669 1 logs_generator.go:76] 41 POST /api/v1/namespaces/default/pods/gvf 550\n" STEP: restricting to a time range Aug 14 14:48:04.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6300 --since=1s' Aug 14 14:48:05.378: INFO: stderr: "" Aug 14 14:48:05.378: INFO: stdout: "I0814 14:48:04.510673 1 logs_generator.go:76] 56 POST /api/v1/namespaces/ns/pods/7nr 327\nI0814 14:48:04.710649 1 logs_generator.go:76] 57 POST /api/v1/namespaces/default/pods/z92 474\nI0814 14:48:04.910721 1 logs_generator.go:76] 58 GET /api/v1/namespaces/default/pods/2xkn 482\nI0814 14:48:05.110674 1 logs_generator.go:76] 59 PUT /api/v1/namespaces/ns/pods/q9g 303\nI0814 14:48:05.310683 1 logs_generator.go:76] 60 GET /api/v1/namespaces/kube-system/pods/bl5 246\n" Aug 14 14:48:05.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6300 --since=24h' Aug 14 14:48:06.697: INFO: stderr: "" Aug 14 14:48:06.698: INFO: stdout: "I0814 14:47:53.310525 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/d8pb 531\nI0814 14:47:53.510671 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/mlw 222\nI0814 14:47:53.711506 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/zcz 359\nI0814 14:47:53.910750 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/tcr 457\nI0814 14:47:54.110727 1 logs_generator.go:76] 4 PUT 
/api/v1/namespaces/kube-system/pods/j4v7 492\nI0814 14:47:54.310698 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/jj8 233\nI0814 14:47:54.510753 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/c8v 488\nI0814 14:47:54.710810 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/zgq6 575\nI0814 14:47:54.910712 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/sx4 473\nI0814 14:47:55.110692 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/82z 283\nI0814 14:47:55.310723 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/8tln 487\nI0814 14:47:55.510732 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/qwxs 370\nI0814 14:47:55.710712 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/s94g 558\nI0814 14:47:55.910677 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/wgmv 240\nI0814 14:47:56.110679 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/t97 371\nI0814 14:47:56.311158 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/b6b 331\nI0814 14:47:56.510677 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/m564 567\nI0814 14:47:56.710685 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/9xjd 295\nI0814 14:47:56.910718 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/6jx 492\nI0814 14:47:57.110719 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/7lc 322\nI0814 14:47:57.310729 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/8469 278\nI0814 14:47:57.510788 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/hpmx 310\nI0814 14:47:57.710706 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/ww94 402\nI0814 14:47:57.910683 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/6gr 503\nI0814 14:47:58.110826 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/p8gf 324\nI0814 14:47:58.310664 1 logs_generator.go:76] 
25 GET /api/v1/namespaces/default/pods/dnj 213\nI0814 14:47:58.510655 1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/nlr 413\nI0814 14:47:58.710686 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/68mr 345\nI0814 14:47:58.910722 1 logs_generator.go:76] 28 POST /api/v1/namespaces/kube-system/pods/f2x 207\nI0814 14:47:59.110683 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/kube-system/pods/47w 576\nI0814 14:47:59.310699 1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/blwj 392\nI0814 14:47:59.510650 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/7p7 285\nI0814 14:47:59.710692 1 logs_generator.go:76] 32 POST /api/v1/namespaces/ns/pods/nxk 447\nI0814 14:47:59.910649 1 logs_generator.go:76] 33 POST /api/v1/namespaces/default/pods/5vfl 376\nI0814 14:48:00.110682 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/ns/pods/8h5j 359\nI0814 14:48:00.310710 1 logs_generator.go:76] 35 POST /api/v1/namespaces/ns/pods/f2p 270\nI0814 14:48:00.510768 1 logs_generator.go:76] 36 POST /api/v1/namespaces/ns/pods/l8d9 562\nI0814 14:48:00.710761 1 logs_generator.go:76] 37 PUT /api/v1/namespaces/kube-system/pods/mdg 463\nI0814 14:48:00.910690 1 logs_generator.go:76] 38 GET /api/v1/namespaces/kube-system/pods/nvs 548\nI0814 14:48:01.110706 1 logs_generator.go:76] 39 POST /api/v1/namespaces/default/pods/jxd 500\nI0814 14:48:01.310639 1 logs_generator.go:76] 40 POST /api/v1/namespaces/ns/pods/4n6m 227\nI0814 14:48:01.510669 1 logs_generator.go:76] 41 POST /api/v1/namespaces/default/pods/gvf 550\nI0814 14:48:01.710734 1 logs_generator.go:76] 42 POST /api/v1/namespaces/ns/pods/n92w 562\nI0814 14:48:01.910707 1 logs_generator.go:76] 43 GET /api/v1/namespaces/default/pods/vtr 531\nI0814 14:48:02.110719 1 logs_generator.go:76] 44 PUT /api/v1/namespaces/kube-system/pods/qrns 516\nI0814 14:48:02.310679 1 logs_generator.go:76] 45 GET /api/v1/namespaces/ns/pods/cz8 557\nI0814 14:48:02.510778 1 logs_generator.go:76] 46 POST 
/api/v1/namespaces/default/pods/64p 395\nI0814 14:48:02.710800 1 logs_generator.go:76] 47 GET /api/v1/namespaces/kube-system/pods/2zr 550\nI0814 14:48:02.910732 1 logs_generator.go:76] 48 PUT /api/v1/namespaces/default/pods/b9w 598\nI0814 14:48:03.110739 1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/fv5w 463\nI0814 14:48:03.310759 1 logs_generator.go:76] 50 POST /api/v1/namespaces/ns/pods/skz 416\nI0814 14:48:03.510705 1 logs_generator.go:76] 51 POST /api/v1/namespaces/default/pods/99x 272\nI0814 14:48:03.710725 1 logs_generator.go:76] 52 PUT /api/v1/namespaces/ns/pods/qfm 456\nI0814 14:48:03.910737 1 logs_generator.go:76] 53 PUT /api/v1/namespaces/ns/pods/l6h8 395\nI0814 14:48:04.110702 1 logs_generator.go:76] 54 POST /api/v1/namespaces/default/pods/7cz 216\nI0814 14:48:04.310666 1 logs_generator.go:76] 55 GET /api/v1/namespaces/default/pods/5w7w 582\nI0814 14:48:04.510673 1 logs_generator.go:76] 56 POST /api/v1/namespaces/ns/pods/7nr 327\nI0814 14:48:04.710649 1 logs_generator.go:76] 57 POST /api/v1/namespaces/default/pods/z92 474\nI0814 14:48:04.910721 1 logs_generator.go:76] 58 GET /api/v1/namespaces/default/pods/2xkn 482\nI0814 14:48:05.110674 1 logs_generator.go:76] 59 PUT /api/v1/namespaces/ns/pods/q9g 303\nI0814 14:48:05.310683 1 logs_generator.go:76] 60 GET /api/v1/namespaces/kube-system/pods/bl5 246\nI0814 14:48:05.510669 1 logs_generator.go:76] 61 PUT /api/v1/namespaces/ns/pods/hfbc 299\nI0814 14:48:05.710694 1 logs_generator.go:76] 62 GET /api/v1/namespaces/kube-system/pods/j2k 271\nI0814 14:48:05.910667 1 logs_generator.go:76] 63 POST /api/v1/namespaces/ns/pods/tkwb 575\nI0814 14:48:06.110688 1 logs_generator.go:76] 64 GET /api/v1/namespaces/ns/pods/v4x 314\nI0814 14:48:06.310692 1 logs_generator.go:76] 65 PUT /api/v1/namespaces/kube-system/pods/b4v 456\nI0814 14:48:06.510666 1 logs_generator.go:76] 66 PUT /api/v1/namespaces/default/pods/qdk 537\n" [AfterEach] Kubectl logs 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Aug 14 14:48:06.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6300' Aug 14 14:48:14.141: INFO: stderr: "" Aug 14 14:48:14.141: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:48:14.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6300" for this suite. • [SLOW TEST:56.245 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":84,"skipped":1471,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:48:14.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with 
defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-1c9361b9-5a07-4b71-bd15-990c4301e4eb STEP: Creating a pod to test consume secrets Aug 14 14:48:15.119: INFO: Waiting up to 5m0s for pod "pod-secrets-e4c4ea2a-16eb-4d63-9d3d-f39f8f5d0692" in namespace "secrets-7857" to be "Succeeded or Failed" Aug 14 14:48:15.144: INFO: Pod "pod-secrets-e4c4ea2a-16eb-4d63-9d3d-f39f8f5d0692": Phase="Pending", Reason="", readiness=false. Elapsed: 25.513147ms Aug 14 14:48:17.151: INFO: Pod "pod-secrets-e4c4ea2a-16eb-4d63-9d3d-f39f8f5d0692": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032088817s Aug 14 14:48:19.261: INFO: Pod "pod-secrets-e4c4ea2a-16eb-4d63-9d3d-f39f8f5d0692": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14209496s Aug 14 14:48:21.269: INFO: Pod "pod-secrets-e4c4ea2a-16eb-4d63-9d3d-f39f8f5d0692": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.149692053s STEP: Saw pod success Aug 14 14:48:21.269: INFO: Pod "pod-secrets-e4c4ea2a-16eb-4d63-9d3d-f39f8f5d0692" satisfied condition "Succeeded or Failed" Aug 14 14:48:21.273: INFO: Trying to get logs from node kali-worker pod pod-secrets-e4c4ea2a-16eb-4d63-9d3d-f39f8f5d0692 container secret-volume-test: STEP: delete the pod Aug 14 14:48:21.564: INFO: Waiting for pod pod-secrets-e4c4ea2a-16eb-4d63-9d3d-f39f8f5d0692 to disappear Aug 14 14:48:21.585: INFO: Pod pod-secrets-e4c4ea2a-16eb-4d63-9d3d-f39f8f5d0692 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:48:21.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7857" for this suite. 
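The Secrets test above mounts a secret volume with an explicit `defaultMode` and verifies the file permissions from inside the container. A minimal sketch of the shape of that pod (secret name, key, and mode here are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo   # must exist in the same namespace
      # YAML octal literal: every projected file gets mode 0400
      # (owner read-only) instead of the 0644 default.
      defaultMode: 0400
```

The `[LinuxOnly]` tag exists because POSIX file modes on projected volumes are not meaningful on Windows nodes.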
• [SLOW TEST:7.459 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1533,"failed":0} SSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:48:21.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 14 14:48:56.805: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5462 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:48:56.805: INFO: >>> kubeConfig: /root/.kube/config I0814 14:48:56.906314 10 log.go:172] (0x40018a02c0) (0x4001e86d20) Create stream I0814 
14:48:56.906511 10 log.go:172] (0x40018a02c0) (0x4001e86d20) Stream added, broadcasting: 1 I0814 14:48:56.911266 10 log.go:172] (0x40018a02c0) Reply frame received for 1 I0814 14:48:56.911516 10 log.go:172] (0x40018a02c0) (0x40027555e0) Create stream I0814 14:48:56.911641 10 log.go:172] (0x40018a02c0) (0x40027555e0) Stream added, broadcasting: 3 I0814 14:48:56.913537 10 log.go:172] (0x40018a02c0) Reply frame received for 3 I0814 14:48:56.913698 10 log.go:172] (0x40018a02c0) (0x4001732000) Create stream I0814 14:48:56.913808 10 log.go:172] (0x40018a02c0) (0x4001732000) Stream added, broadcasting: 5 I0814 14:48:56.915481 10 log.go:172] (0x40018a02c0) Reply frame received for 5 I0814 14:48:56.989309 10 log.go:172] (0x40018a02c0) Data frame received for 3 I0814 14:48:56.989492 10 log.go:172] (0x40027555e0) (3) Data frame handling I0814 14:48:56.989661 10 log.go:172] (0x40018a02c0) Data frame received for 5 I0814 14:48:56.989866 10 log.go:172] (0x4001732000) (5) Data frame handling I0814 14:48:56.990002 10 log.go:172] (0x40027555e0) (3) Data frame sent I0814 14:48:56.990111 10 log.go:172] (0x40018a02c0) Data frame received for 3 I0814 14:48:56.990199 10 log.go:172] (0x40027555e0) (3) Data frame handling I0814 14:48:56.991010 10 log.go:172] (0x40018a02c0) Data frame received for 1 I0814 14:48:56.991129 10 log.go:172] (0x4001e86d20) (1) Data frame handling I0814 14:48:56.991254 10 log.go:172] (0x4001e86d20) (1) Data frame sent I0814 14:48:56.991383 10 log.go:172] (0x40018a02c0) (0x4001e86d20) Stream removed, broadcasting: 1 I0814 14:48:56.991554 10 log.go:172] (0x40018a02c0) Go away received I0814 14:48:56.992101 10 log.go:172] (0x40018a02c0) (0x4001e86d20) Stream removed, broadcasting: 1 I0814 14:48:56.992277 10 log.go:172] (0x40018a02c0) (0x40027555e0) Stream removed, broadcasting: 3 I0814 14:48:56.992420 10 log.go:172] (0x40018a02c0) (0x4001732000) Stream removed, broadcasting: 5 Aug 14 14:48:56.992: INFO: Exec stderr: "" Aug 14 14:48:56.993: INFO: ExecWithOptions 
{Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5462 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:48:56.993: INFO: >>> kubeConfig: /root/.kube/config I0814 14:48:57.055037 10 log.go:172] (0x40024231e0) (0x40027cc3c0) Create stream I0814 14:48:57.055295 10 log.go:172] (0x40024231e0) (0x40027cc3c0) Stream added, broadcasting: 1 I0814 14:48:57.060359 10 log.go:172] (0x40024231e0) Reply frame received for 1 I0814 14:48:57.060535 10 log.go:172] (0x40024231e0) (0x40027cc460) Create stream I0814 14:48:57.060617 10 log.go:172] (0x40024231e0) (0x40027cc460) Stream added, broadcasting: 3 I0814 14:48:57.062143 10 log.go:172] (0x40024231e0) Reply frame received for 3 I0814 14:48:57.062300 10 log.go:172] (0x40024231e0) (0x40027cc500) Create stream I0814 14:48:57.062395 10 log.go:172] (0x40024231e0) (0x40027cc500) Stream added, broadcasting: 5 I0814 14:48:57.064072 10 log.go:172] (0x40024231e0) Reply frame received for 5 I0814 14:48:57.122916 10 log.go:172] (0x40024231e0) Data frame received for 3 I0814 14:48:57.123145 10 log.go:172] (0x40027cc460) (3) Data frame handling I0814 14:48:57.123371 10 log.go:172] (0x40024231e0) Data frame received for 5 I0814 14:48:57.123591 10 log.go:172] (0x40027cc500) (5) Data frame handling I0814 14:48:57.123744 10 log.go:172] (0x40027cc460) (3) Data frame sent I0814 14:48:57.123898 10 log.go:172] (0x40024231e0) Data frame received for 3 I0814 14:48:57.124040 10 log.go:172] (0x40027cc460) (3) Data frame handling I0814 14:48:57.124224 10 log.go:172] (0x40024231e0) Data frame received for 1 I0814 14:48:57.124368 10 log.go:172] (0x40027cc3c0) (1) Data frame handling I0814 14:48:57.124506 10 log.go:172] (0x40027cc3c0) (1) Data frame sent I0814 14:48:57.124657 10 log.go:172] (0x40024231e0) (0x40027cc3c0) Stream removed, broadcasting: 1 I0814 14:48:57.124949 10 log.go:172] (0x40024231e0) Go away received I0814 14:48:57.125395 10 log.go:172] (0x40024231e0) 
(0x40027cc3c0) Stream removed, broadcasting: 1 I0814 14:48:57.125592 10 log.go:172] (0x40024231e0) (0x40027cc460) Stream removed, broadcasting: 3 I0814 14:48:57.125741 10 log.go:172] (0x40024231e0) (0x40027cc500) Stream removed, broadcasting: 5 Aug 14 14:48:57.125: INFO: Exec stderr: "" Aug 14 14:48:57.126: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5462 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:48:57.126: INFO: >>> kubeConfig: /root/.kube/config I0814 14:48:57.183508 10 log.go:172] (0x400281c4d0) (0x40017326e0) Create stream I0814 14:48:57.183662 10 log.go:172] (0x400281c4d0) (0x40017326e0) Stream added, broadcasting: 1 I0814 14:48:57.190224 10 log.go:172] (0x400281c4d0) Reply frame received for 1 I0814 14:48:57.190414 10 log.go:172] (0x400281c4d0) (0x4001edd680) Create stream I0814 14:48:57.190503 10 log.go:172] (0x400281c4d0) (0x4001edd680) Stream added, broadcasting: 3 I0814 14:48:57.192413 10 log.go:172] (0x400281c4d0) Reply frame received for 3 I0814 14:48:57.192568 10 log.go:172] (0x400281c4d0) (0x4001732780) Create stream I0814 14:48:57.192654 10 log.go:172] (0x400281c4d0) (0x4001732780) Stream added, broadcasting: 5 I0814 14:48:57.194856 10 log.go:172] (0x400281c4d0) Reply frame received for 5 I0814 14:48:57.265030 10 log.go:172] (0x400281c4d0) Data frame received for 3 I0814 14:48:57.265273 10 log.go:172] (0x4001edd680) (3) Data frame handling I0814 14:48:57.265381 10 log.go:172] (0x400281c4d0) Data frame received for 5 I0814 14:48:57.265494 10 log.go:172] (0x4001732780) (5) Data frame handling I0814 14:48:57.265589 10 log.go:172] (0x4001edd680) (3) Data frame sent I0814 14:48:57.265728 10 log.go:172] (0x400281c4d0) Data frame received for 1 I0814 14:48:57.265868 10 log.go:172] (0x40017326e0) (1) Data frame handling I0814 14:48:57.265991 10 log.go:172] (0x400281c4d0) Data frame received for 3 I0814 14:48:57.266128 10 log.go:172] 
(0x4001edd680) (3) Data frame handling I0814 14:48:57.266259 10 log.go:172] (0x40017326e0) (1) Data frame sent I0814 14:48:57.266359 10 log.go:172] (0x400281c4d0) (0x40017326e0) Stream removed, broadcasting: 1 I0814 14:48:57.266467 10 log.go:172] (0x400281c4d0) Go away received I0814 14:48:57.266850 10 log.go:172] (0x400281c4d0) (0x40017326e0) Stream removed, broadcasting: 1 I0814 14:48:57.266967 10 log.go:172] (0x400281c4d0) (0x4001edd680) Stream removed, broadcasting: 3 I0814 14:48:57.267042 10 log.go:172] (0x400281c4d0) (0x4001732780) Stream removed, broadcasting: 5 Aug 14 14:48:57.267: INFO: Exec stderr: "" Aug 14 14:48:57.267: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5462 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:48:57.267: INFO: >>> kubeConfig: /root/.kube/config I0814 14:48:57.320145 10 log.go:172] (0x40029d4630) (0x4001eddb80) Create stream I0814 14:48:57.320337 10 log.go:172] (0x40029d4630) (0x4001eddb80) Stream added, broadcasting: 1 I0814 14:48:57.323242 10 log.go:172] (0x40029d4630) Reply frame received for 1 I0814 14:48:57.323399 10 log.go:172] (0x40029d4630) (0x4001732960) Create stream I0814 14:48:57.323481 10 log.go:172] (0x40029d4630) (0x4001732960) Stream added, broadcasting: 3 I0814 14:48:57.325017 10 log.go:172] (0x40029d4630) Reply frame received for 3 I0814 14:48:57.325111 10 log.go:172] (0x40029d4630) (0x4001eddc20) Create stream I0814 14:48:57.325164 10 log.go:172] (0x40029d4630) (0x4001eddc20) Stream added, broadcasting: 5 I0814 14:48:57.326262 10 log.go:172] (0x40029d4630) Reply frame received for 5 I0814 14:48:57.385023 10 log.go:172] (0x40029d4630) Data frame received for 3 I0814 14:48:57.385268 10 log.go:172] (0x4001732960) (3) Data frame handling I0814 14:48:57.385484 10 log.go:172] (0x4001732960) (3) Data frame sent I0814 14:48:57.385687 10 log.go:172] (0x40029d4630) Data frame received for 3 I0814 
14:48:57.385918 10 log.go:172] (0x4001732960) (3) Data frame handling I0814 14:48:57.386179 10 log.go:172] (0x40029d4630) Data frame received for 5 I0814 14:48:57.386385 10 log.go:172] (0x4001eddc20) (5) Data frame handling I0814 14:48:57.386784 10 log.go:172] (0x40029d4630) Data frame received for 1 I0814 14:48:57.387044 10 log.go:172] (0x4001eddb80) (1) Data frame handling I0814 14:48:57.387201 10 log.go:172] (0x4001eddb80) (1) Data frame sent I0814 14:48:57.387356 10 log.go:172] (0x40029d4630) (0x4001eddb80) Stream removed, broadcasting: 1 I0814 14:48:57.387546 10 log.go:172] (0x40029d4630) Go away received I0814 14:48:57.388254 10 log.go:172] (0x40029d4630) (0x4001eddb80) Stream removed, broadcasting: 1 I0814 14:48:57.388437 10 log.go:172] (0x40029d4630) (0x4001732960) Stream removed, broadcasting: 3 I0814 14:48:57.388578 10 log.go:172] (0x40029d4630) (0x4001eddc20) Stream removed, broadcasting: 5 Aug 14 14:48:57.388: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 14 14:48:57.389: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5462 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:48:57.389: INFO: >>> kubeConfig: /root/.kube/config I0814 14:48:57.449467 10 log.go:172] (0x40024238c0) (0x40027cc820) Create stream I0814 14:48:57.449667 10 log.go:172] (0x40024238c0) (0x40027cc820) Stream added, broadcasting: 1 I0814 14:48:57.453430 10 log.go:172] (0x40024238c0) Reply frame received for 1 I0814 14:48:57.453641 10 log.go:172] (0x40024238c0) (0x4001e86dc0) Create stream I0814 14:48:57.453781 10 log.go:172] (0x40024238c0) (0x4001e86dc0) Stream added, broadcasting: 3 I0814 14:48:57.455344 10 log.go:172] (0x40024238c0) Reply frame received for 3 I0814 14:48:57.455515 10 log.go:172] (0x40024238c0) (0x40027ccaa0) Create stream I0814 14:48:57.455631 10 log.go:172] (0x40024238c0) 
(0x40027ccaa0) Stream added, broadcasting: 5 I0814 14:48:57.457387 10 log.go:172] (0x40024238c0) Reply frame received for 5 I0814 14:48:57.522347 10 log.go:172] (0x40024238c0) Data frame received for 5 I0814 14:48:57.522540 10 log.go:172] (0x40027ccaa0) (5) Data frame handling I0814 14:48:57.522745 10 log.go:172] (0x40024238c0) Data frame received for 3 I0814 14:48:57.522934 10 log.go:172] (0x4001e86dc0) (3) Data frame handling I0814 14:48:57.523132 10 log.go:172] (0x4001e86dc0) (3) Data frame sent I0814 14:48:57.523252 10 log.go:172] (0x40024238c0) Data frame received for 3 I0814 14:48:57.523345 10 log.go:172] (0x4001e86dc0) (3) Data frame handling I0814 14:48:57.524053 10 log.go:172] (0x40024238c0) Data frame received for 1 I0814 14:48:57.524205 10 log.go:172] (0x40027cc820) (1) Data frame handling I0814 14:48:57.524362 10 log.go:172] (0x40027cc820) (1) Data frame sent I0814 14:48:57.524510 10 log.go:172] (0x40024238c0) (0x40027cc820) Stream removed, broadcasting: 1 I0814 14:48:57.524720 10 log.go:172] (0x40024238c0) Go away received I0814 14:48:57.525856 10 log.go:172] (0x40024238c0) (0x40027cc820) Stream removed, broadcasting: 1 I0814 14:48:57.525934 10 log.go:172] (0x40024238c0) (0x4001e86dc0) Stream removed, broadcasting: 3 I0814 14:48:57.526005 10 log.go:172] (0x40024238c0) (0x40027ccaa0) Stream removed, broadcasting: 5 Aug 14 14:48:57.526: INFO: Exec stderr: "" Aug 14 14:48:57.526: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5462 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:48:57.526: INFO: >>> kubeConfig: /root/.kube/config I0814 14:48:57.583937 10 log.go:172] (0x400281cbb0) (0x4001732c80) Create stream I0814 14:48:57.584317 10 log.go:172] (0x400281cbb0) (0x4001732c80) Stream added, broadcasting: 1 I0814 14:48:57.590247 10 log.go:172] (0x400281cbb0) Reply frame received for 1 I0814 14:48:57.590427 10 log.go:172] (0x400281cbb0) 
(0x4002392500) Create stream I0814 14:48:57.590499 10 log.go:172] (0x400281cbb0) (0x4002392500) Stream added, broadcasting: 3 I0814 14:48:57.591683 10 log.go:172] (0x400281cbb0) Reply frame received for 3 I0814 14:48:57.591813 10 log.go:172] (0x400281cbb0) (0x4001eddd60) Create stream I0814 14:48:57.591879 10 log.go:172] (0x400281cbb0) (0x4001eddd60) Stream added, broadcasting: 5 I0814 14:48:57.593293 10 log.go:172] (0x400281cbb0) Reply frame received for 5 I0814 14:48:57.647989 10 log.go:172] (0x400281cbb0) Data frame received for 5 I0814 14:48:57.648113 10 log.go:172] (0x4001eddd60) (5) Data frame handling I0814 14:48:57.648218 10 log.go:172] (0x400281cbb0) Data frame received for 3 I0814 14:48:57.648288 10 log.go:172] (0x4002392500) (3) Data frame handling I0814 14:48:57.648364 10 log.go:172] (0x4002392500) (3) Data frame sent I0814 14:48:57.648432 10 log.go:172] (0x400281cbb0) Data frame received for 3 I0814 14:48:57.648500 10 log.go:172] (0x4002392500) (3) Data frame handling I0814 14:48:57.649326 10 log.go:172] (0x400281cbb0) Data frame received for 1 I0814 14:48:57.649405 10 log.go:172] (0x4001732c80) (1) Data frame handling I0814 14:48:57.649482 10 log.go:172] (0x4001732c80) (1) Data frame sent I0814 14:48:57.649566 10 log.go:172] (0x400281cbb0) (0x4001732c80) Stream removed, broadcasting: 1 I0814 14:48:57.649685 10 log.go:172] (0x400281cbb0) Go away received I0814 14:48:57.649850 10 log.go:172] (0x400281cbb0) (0x4001732c80) Stream removed, broadcasting: 1 I0814 14:48:57.649931 10 log.go:172] (0x400281cbb0) (0x4002392500) Stream removed, broadcasting: 3 I0814 14:48:57.649995 10 log.go:172] (0x400281cbb0) (0x4001eddd60) Stream removed, broadcasting: 5 Aug 14 14:48:57.650: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 14 14:48:57.650: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5462 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:48:57.650: INFO: >>> kubeConfig: /root/.kube/config I0814 14:48:58.025243 10 log.go:172] (0x40029d4dc0) (0x4001eddf40) Create stream I0814 14:48:58.025503 10 log.go:172] (0x40029d4dc0) (0x4001eddf40) Stream added, broadcasting: 1 I0814 14:48:58.030093 10 log.go:172] (0x40029d4dc0) Reply frame received for 1 I0814 14:48:58.030317 10 log.go:172] (0x40029d4dc0) (0x40027ccb40) Create stream I0814 14:48:58.030495 10 log.go:172] (0x40029d4dc0) (0x40027ccb40) Stream added, broadcasting: 3 I0814 14:48:58.032339 10 log.go:172] (0x40029d4dc0) Reply frame received for 3 I0814 14:48:58.032524 10 log.go:172] (0x40029d4dc0) (0x40027ccbe0) Create stream I0814 14:48:58.032646 10 log.go:172] (0x40029d4dc0) (0x40027ccbe0) Stream added, broadcasting: 5 I0814 14:48:58.034319 10 log.go:172] (0x40029d4dc0) Reply frame received for 5 I0814 14:48:58.090830 10 log.go:172] (0x40029d4dc0) Data frame received for 3 I0814 14:48:58.091004 10 log.go:172] (0x40027ccb40) (3) Data frame handling I0814 14:48:58.091079 10 log.go:172] (0x40027ccb40) (3) Data frame sent I0814 14:48:58.091149 10 log.go:172] (0x40029d4dc0) Data frame received for 3 I0814 14:48:58.091208 10 log.go:172] (0x40027ccb40) (3) Data frame handling I0814 14:48:58.091276 10 log.go:172] (0x40029d4dc0) Data frame received for 5 I0814 14:48:58.091364 10 log.go:172] (0x40027ccbe0) (5) Data frame handling I0814 14:48:58.092259 10 log.go:172] (0x40029d4dc0) Data frame received for 1 I0814 14:48:58.092339 10 log.go:172] (0x4001eddf40) (1) Data frame handling I0814 14:48:58.092420 10 log.go:172] (0x4001eddf40) (1) Data frame sent I0814 14:48:58.092502 10 log.go:172] (0x40029d4dc0) (0x4001eddf40) Stream removed, broadcasting: 1 I0814 14:48:58.092717 10 log.go:172] (0x40029d4dc0) Go away received I0814 14:48:58.093195 10 log.go:172] (0x40029d4dc0) (0x4001eddf40) Stream removed, broadcasting: 1 I0814 14:48:58.093272 10 log.go:172] (0x40029d4dc0) (0x40027ccb40) Stream 
removed, broadcasting: 3 I0814 14:48:58.093342 10 log.go:172] (0x40029d4dc0) (0x40027ccbe0) Stream removed, broadcasting: 5 Aug 14 14:48:58.093: INFO: Exec stderr: "" Aug 14 14:48:58.093: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5462 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:48:58.093: INFO: >>> kubeConfig: /root/.kube/config I0814 14:48:58.231158 10 log.go:172] (0x40029d53f0) (0x40011c4460) Create stream I0814 14:48:58.231333 10 log.go:172] (0x40029d53f0) (0x40011c4460) Stream added, broadcasting: 1 I0814 14:48:58.235729 10 log.go:172] (0x40029d53f0) Reply frame received for 1 I0814 14:48:58.235885 10 log.go:172] (0x40029d53f0) (0x40027ccc80) Create stream I0814 14:48:58.235986 10 log.go:172] (0x40029d53f0) (0x40027ccc80) Stream added, broadcasting: 3 I0814 14:48:58.237349 10 log.go:172] (0x40029d53f0) Reply frame received for 3 I0814 14:48:58.237529 10 log.go:172] (0x40029d53f0) (0x40027ccd20) Create stream I0814 14:48:58.237637 10 log.go:172] (0x40029d53f0) (0x40027ccd20) Stream added, broadcasting: 5 I0814 14:48:58.239167 10 log.go:172] (0x40029d53f0) Reply frame received for 5 I0814 14:48:58.290062 10 log.go:172] (0x40029d53f0) Data frame received for 3 I0814 14:48:58.290225 10 log.go:172] (0x40027ccc80) (3) Data frame handling I0814 14:48:58.290315 10 log.go:172] (0x40029d53f0) Data frame received for 5 I0814 14:48:58.290426 10 log.go:172] (0x40027ccd20) (5) Data frame handling I0814 14:48:58.290503 10 log.go:172] (0x40027ccc80) (3) Data frame sent I0814 14:48:58.290635 10 log.go:172] (0x40029d53f0) Data frame received for 3 I0814 14:48:58.290734 10 log.go:172] (0x40027ccc80) (3) Data frame handling I0814 14:48:58.291461 10 log.go:172] (0x40029d53f0) Data frame received for 1 I0814 14:48:58.291563 10 log.go:172] (0x40011c4460) (1) Data frame handling I0814 14:48:58.291674 10 log.go:172] (0x40011c4460) (1) Data frame sent 
I0814 14:48:58.291806 10 log.go:172] (0x40029d53f0) (0x40011c4460) Stream removed, broadcasting: 1 I0814 14:48:58.291928 10 log.go:172] (0x40029d53f0) Go away received I0814 14:48:58.292243 10 log.go:172] (0x40029d53f0) (0x40011c4460) Stream removed, broadcasting: 1 I0814 14:48:58.292334 10 log.go:172] (0x40029d53f0) (0x40027ccc80) Stream removed, broadcasting: 3 I0814 14:48:58.292409 10 log.go:172] (0x40029d53f0) (0x40027ccd20) Stream removed, broadcasting: 5 Aug 14 14:48:58.292: INFO: Exec stderr: "" Aug 14 14:48:58.292: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5462 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:48:58.292: INFO: >>> kubeConfig: /root/.kube/config I0814 14:48:58.349476 10 log.go:172] (0x40018a08f0) (0x4001e870e0) Create stream I0814 14:48:58.349657 10 log.go:172] (0x40018a08f0) (0x4001e870e0) Stream added, broadcasting: 1 I0814 14:48:58.352647 10 log.go:172] (0x40018a08f0) Reply frame received for 1 I0814 14:48:58.352851 10 log.go:172] (0x40018a08f0) (0x40027ccdc0) Create stream I0814 14:48:58.352921 10 log.go:172] (0x40018a08f0) (0x40027ccdc0) Stream added, broadcasting: 3 I0814 14:48:58.353977 10 log.go:172] (0x40018a08f0) Reply frame received for 3 I0814 14:48:58.354082 10 log.go:172] (0x40018a08f0) (0x40027cce60) Create stream I0814 14:48:58.354140 10 log.go:172] (0x40018a08f0) (0x40027cce60) Stream added, broadcasting: 5 I0814 14:48:58.355266 10 log.go:172] (0x40018a08f0) Reply frame received for 5 I0814 14:48:58.437184 10 log.go:172] (0x40018a08f0) Data frame received for 5 I0814 14:48:58.437316 10 log.go:172] (0x40027cce60) (5) Data frame handling I0814 14:48:58.437453 10 log.go:172] (0x40018a08f0) Data frame received for 3 I0814 14:48:58.437595 10 log.go:172] (0x40027ccdc0) (3) Data frame handling I0814 14:48:58.437737 10 log.go:172] (0x40027ccdc0) (3) Data frame sent I0814 14:48:58.437875 10 log.go:172] 
(0x40018a08f0) Data frame received for 3 I0814 14:48:58.437985 10 log.go:172] (0x40027ccdc0) (3) Data frame handling I0814 14:48:58.438810 10 log.go:172] (0x40018a08f0) Data frame received for 1 I0814 14:48:58.438876 10 log.go:172] (0x4001e870e0) (1) Data frame handling I0814 14:48:58.438943 10 log.go:172] (0x4001e870e0) (1) Data frame sent I0814 14:48:58.439141 10 log.go:172] (0x40018a08f0) (0x4001e870e0) Stream removed, broadcasting: 1 I0814 14:48:58.439433 10 log.go:172] (0x40018a08f0) Go away received I0814 14:48:58.439533 10 log.go:172] (0x40018a08f0) (0x4001e870e0) Stream removed, broadcasting: 1 I0814 14:48:58.439609 10 log.go:172] (0x40018a08f0) (0x40027ccdc0) Stream removed, broadcasting: 3 I0814 14:48:58.439673 10 log.go:172] (0x40018a08f0) (0x40027cce60) Stream removed, broadcasting: 5 Aug 14 14:48:58.439: INFO: Exec stderr: "" Aug 14 14:48:58.439: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5462 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 14:48:58.439: INFO: >>> kubeConfig: /root/.kube/config I0814 14:48:58.494919 10 log.go:172] (0x4002423ef0) (0x40027cd0e0) Create stream I0814 14:48:58.495087 10 log.go:172] (0x4002423ef0) (0x40027cd0e0) Stream added, broadcasting: 1 I0814 14:48:58.499069 10 log.go:172] (0x4002423ef0) Reply frame received for 1 I0814 14:48:58.499284 10 log.go:172] (0x4002423ef0) (0x40027cd180) Create stream I0814 14:48:58.499396 10 log.go:172] (0x4002423ef0) (0x40027cd180) Stream added, broadcasting: 3 I0814 14:48:58.501194 10 log.go:172] (0x4002423ef0) Reply frame received for 3 I0814 14:48:58.501355 10 log.go:172] (0x4002423ef0) (0x40027cd220) Create stream I0814 14:48:58.501450 10 log.go:172] (0x4002423ef0) (0x40027cd220) Stream added, broadcasting: 5 I0814 14:48:58.503127 10 log.go:172] (0x4002423ef0) Reply frame received for 5 I0814 14:48:58.579672 10 log.go:172] (0x4002423ef0) Data frame received for 
3 I0814 14:48:58.579913 10 log.go:172] (0x40027cd180) (3) Data frame handling I0814 14:48:58.580042 10 log.go:172] (0x40027cd180) (3) Data frame sent I0814 14:48:58.580135 10 log.go:172] (0x4002423ef0) Data frame received for 3 I0814 14:48:58.580198 10 log.go:172] (0x40027cd180) (3) Data frame handling I0814 14:48:58.580308 10 log.go:172] (0x4002423ef0) Data frame received for 5 I0814 14:48:58.580387 10 log.go:172] (0x40027cd220) (5) Data frame handling I0814 14:48:58.580959 10 log.go:172] (0x4002423ef0) Data frame received for 1 I0814 14:48:58.581028 10 log.go:172] (0x40027cd0e0) (1) Data frame handling I0814 14:48:58.581096 10 log.go:172] (0x40027cd0e0) (1) Data frame sent I0814 14:48:58.581170 10 log.go:172] (0x4002423ef0) (0x40027cd0e0) Stream removed, broadcasting: 1 I0814 14:48:58.581270 10 log.go:172] (0x4002423ef0) Go away received I0814 14:48:58.581926 10 log.go:172] (0x4002423ef0) (0x40027cd0e0) Stream removed, broadcasting: 1 I0814 14:48:58.582107 10 log.go:172] (0x4002423ef0) (0x40027cd180) Stream removed, broadcasting: 3 I0814 14:48:58.582435 10 log.go:172] (0x4002423ef0) (0x40027cd220) Stream removed, broadcasting: 5 Aug 14 14:48:58.582: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:48:58.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5462" for this suite. 
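For orientation, the pods this test execs into have roughly the following shape (the real specs are constructed in Go by the e2e framework; all names, images, and the hostPath volume below are illustrative assumptions, not the exact manifests): two containers that receive the kubelet-managed /etc/hosts, a third that mounts its own file over /etc/hosts and therefore opts out of kubelet management, and a second pod with hostNetwork: true that keeps the node's own /etc/hosts.

```yaml
# Illustrative sketch only - the conformance test builds equivalent specs in Go.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  volumes:
    - name: host-etc-hosts
      hostPath:
        path: /etc/hosts
  containers:
    - name: busybox-1            # /etc/hosts here is kubelet-managed
      image: busybox
      command: ["sleep", "900"]
    - name: busybox-2            # same: kubelet-managed
      image: busybox
      command: ["sleep", "900"]
    - name: busybox-3            # an explicit mount at /etc/hosts disables kubelet management
      image: busybox
      command: ["sleep", "900"]
      volumeMounts:
        - name: host-etc-hosts
          mountPath: /etc/hosts
---
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod
spec:
  hostNetwork: true              # host-network pods keep the node's /etc/hosts
  containers:
    - name: busybox-1
      image: busybox
      command: ["sleep", "900"]
```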
• [SLOW TEST:36.917 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1537,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:48:58.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:49:09.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3019" for this suite.
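The read-only-root-filesystem check above boils down to a pod spec like the following (a sketch under assumed names; the actual test builds the spec in Go and verifies the container cannot write to its root filesystem):

```yaml
# Illustrative sketch only - names and the command are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
      securityContext:
        readOnlyRootFilesystem: true   # the write to /file is expected to fail
```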
• [SLOW TEST:11.128 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1541,"failed":0}
[sig-cli] Kubectl client Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:49:09.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Aug 14 14:49:12.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config api-versions'
Aug 14 14:49:14.964: INFO: stderr: ""
Aug 14 14:49:14.964: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:49:14.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4494" for this suite.
• [SLOW TEST:5.260 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:716
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":88,"skipped":1541,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:49:14.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 14 14:49:16.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714" in namespace "projected-1207" to be "Succeeded or Failed"
Aug 14 14:49:16.797: INFO: Pod "downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714": Phase="Pending", Reason="", readiness=false. Elapsed: 160.605187ms
Aug 14 14:49:19.215: INFO: Pod "downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714": Phase="Pending", Reason="", readiness=false. Elapsed: 2.578657765s
Aug 14 14:49:21.572: INFO: Pod "downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714": Phase="Pending", Reason="", readiness=false. Elapsed: 4.935341236s
Aug 14 14:49:23.680: INFO: Pod "downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714": Phase="Pending", Reason="", readiness=false. Elapsed: 7.043441823s
Aug 14 14:49:25.787: INFO: Pod "downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714": Phase="Pending", Reason="", readiness=false. Elapsed: 9.151099005s
Aug 14 14:49:28.204: INFO: Pod "downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.568019115s
STEP: Saw pod success
Aug 14 14:49:28.205: INFO: Pod "downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714" satisfied condition "Succeeded or Failed"
Aug 14 14:49:28.210: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714 container client-container:
STEP: delete the pod
Aug 14 14:49:28.440: INFO: Waiting for pod downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714 to disappear
Aug 14 14:49:28.455: INFO: Pod downwardapi-volume-63f72840-9083-4258-b984-d50c7a72d714 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:49:28.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1207" for this suite.
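The downward API volume pod in this test is built in Go, but a projected volume with an explicit defaultMode looks roughly like this (a sketch; the image, paths, and 0400 mode here are illustrative assumptions, with the mode being what a DefaultMode test would assert on the mounted files):

```yaml
# Illustrative sketch only - the test constructs an equivalent spec in Go.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
    - name: client-container
      image: busybox
      command: ["/bin/sh", "-c", "ls -l /etc/podinfo"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        defaultMode: 0400        # file mode the test would verify (r--------)
        sources:
          - downwardAPI:
              items:
                - path: podname
                  fieldRef:
                    fieldPath: metadata.name
```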
• [SLOW TEST:13.482 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1546,"failed":0}
SS
------------------------------
[sig-apps] Job
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:49:28.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 14 14:49:35.577: INFO: Successfully updated pod "adopt-release-9xlsj"
STEP: Checking that the Job readopts the Pod
Aug 14 14:49:35.578: INFO: Waiting up to 15m0s for pod "adopt-release-9xlsj" in namespace "job-2144" to be "adopted"
Aug 14 14:49:35.644: INFO: Pod "adopt-release-9xlsj": Phase="Running", Reason="", readiness=true. Elapsed: 65.606117ms
Aug 14 14:49:37.652: INFO: Pod "adopt-release-9xlsj": Phase="Running", Reason="", readiness=true. Elapsed: 2.073381037s
Aug 14 14:49:37.652: INFO: Pod "adopt-release-9xlsj" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 14 14:49:38.172: INFO: Successfully updated pod "adopt-release-9xlsj"
STEP: Checking that the Job releases the Pod
Aug 14 14:49:38.173: INFO: Waiting up to 15m0s for pod "adopt-release-9xlsj" in namespace "job-2144" to be "released"
Aug 14 14:49:38.206: INFO: Pod "adopt-release-9xlsj": Phase="Running", Reason="", readiness=true. Elapsed: 33.353525ms
Aug 14 14:49:40.419: INFO: Pod "adopt-release-9xlsj": Phase="Running", Reason="", readiness=true. Elapsed: 2.245905833s
Aug 14 14:49:40.419: INFO: Pod "adopt-release-9xlsj" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:49:40.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2144" for this suite.
• [SLOW TEST:12.372 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":90,"skipped":1548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:49:40.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object,
basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-3589 STEP: creating replication controller nodeport-test in namespace services-3589 I0814 14:49:42.751846 10 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3589, replica count: 2 I0814 14:49:45.803278 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 14:49:48.804202 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 14:49:51.805223 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 14:49:54.805941 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 14 14:49:54.806: INFO: Creating new exec pod Aug 14 14:50:07.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3589 execpodcv77m -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 14 14:50:09.204: INFO: stderr: "I0814 14:50:09.103206 717 log.go:172] (0x40009a8000) (0x40008072c0) Create stream\nI0814 14:50:09.106335 717 log.go:172] (0x40009a8000) (0x40008072c0) Stream added, broadcasting: 1\nI0814 14:50:09.118378 717 log.go:172] (0x40009a8000) Reply frame received for 1\nI0814 14:50:09.119232 717 log.go:172] 
(0x40009a8000) (0x4000524000) Create stream\nI0814 14:50:09.119297 717 log.go:172] (0x40009a8000) (0x4000524000) Stream added, broadcasting: 3\nI0814 14:50:09.120674 717 log.go:172] (0x40009a8000) Reply frame received for 3\nI0814 14:50:09.120960 717 log.go:172] (0x40009a8000) (0x400054c000) Create stream\nI0814 14:50:09.121027 717 log.go:172] (0x40009a8000) (0x400054c000) Stream added, broadcasting: 5\nI0814 14:50:09.122253 717 log.go:172] (0x40009a8000) Reply frame received for 5\nI0814 14:50:09.182359 717 log.go:172] (0x40009a8000) Data frame received for 5\nI0814 14:50:09.182814 717 log.go:172] (0x400054c000) (5) Data frame handling\nI0814 14:50:09.183736 717 log.go:172] (0x40009a8000) Data frame received for 3\nI0814 14:50:09.183942 717 log.go:172] (0x4000524000) (3) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nI0814 14:50:09.184042 717 log.go:172] (0x40009a8000) Data frame received for 1\nI0814 14:50:09.186268 717 log.go:172] (0x40008072c0) (1) Data frame handling\nI0814 14:50:09.187767 717 log.go:172] (0x400054c000) (5) Data frame sent\nI0814 14:50:09.189297 717 log.go:172] (0x40008072c0) (1) Data frame sent\nI0814 14:50:09.189508 717 log.go:172] (0x40009a8000) Data frame received for 5\nI0814 14:50:09.189614 717 log.go:172] (0x400054c000) (5) Data frame handling\nI0814 14:50:09.189730 717 log.go:172] (0x400054c000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0814 14:50:09.189825 717 log.go:172] (0x40009a8000) Data frame received for 5\nI0814 14:50:09.189905 717 log.go:172] (0x400054c000) (5) Data frame handling\nI0814 14:50:09.190787 717 log.go:172] (0x40009a8000) (0x40008072c0) Stream removed, broadcasting: 1\nI0814 14:50:09.191435 717 log.go:172] (0x40009a8000) Go away received\nI0814 14:50:09.195241 717 log.go:172] (0x40009a8000) (0x40008072c0) Stream removed, broadcasting: 1\nI0814 14:50:09.195526 717 log.go:172] (0x40009a8000) (0x4000524000) Stream removed, broadcasting: 3\nI0814 14:50:09.195709 717 
log.go:172] (0x40009a8000) (0x400054c000) Stream removed, broadcasting: 5\n" Aug 14 14:50:09.205: INFO: stdout: "" Aug 14 14:50:09.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3589 execpodcv77m -- /bin/sh -x -c nc -zv -t -w 2 10.101.115.101 80' Aug 14 14:50:11.086: INFO: stderr: "I0814 14:50:10.968581 741 log.go:172] (0x40007d4e70) (0x4000972140) Create stream\nI0814 14:50:10.973531 741 log.go:172] (0x40007d4e70) (0x4000972140) Stream added, broadcasting: 1\nI0814 14:50:10.987396 741 log.go:172] (0x40007d4e70) Reply frame received for 1\nI0814 14:50:10.988646 741 log.go:172] (0x40007d4e70) (0x400080d360) Create stream\nI0814 14:50:10.988926 741 log.go:172] (0x40007d4e70) (0x400080d360) Stream added, broadcasting: 3\nI0814 14:50:10.991542 741 log.go:172] (0x40007d4e70) Reply frame received for 3\nI0814 14:50:10.992063 741 log.go:172] (0x40007d4e70) (0x4000966000) Create stream\nI0814 14:50:10.992157 741 log.go:172] (0x40007d4e70) (0x4000966000) Stream added, broadcasting: 5\nI0814 14:50:10.993791 741 log.go:172] (0x40007d4e70) Reply frame received for 5\nI0814 14:50:11.065826 741 log.go:172] (0x40007d4e70) Data frame received for 3\nI0814 14:50:11.066142 741 log.go:172] (0x40007d4e70) Data frame received for 1\nI0814 14:50:11.066533 741 log.go:172] (0x400080d360) (3) Data frame handling\nI0814 14:50:11.067039 741 log.go:172] (0x40007d4e70) Data frame received for 5\nI0814 14:50:11.067403 741 log.go:172] (0x4000966000) (5) Data frame handling\nI0814 14:50:11.067634 741 log.go:172] (0x4000972140) (1) Data frame handling\n+ nc -zv -t -w 2 10.101.115.101 80\nConnection to 10.101.115.101 80 port [tcp/http] succeeded!\nI0814 14:50:11.070505 741 log.go:172] (0x4000966000) (5) Data frame sent\nI0814 14:50:11.070663 741 log.go:172] (0x4000972140) (1) Data frame sent\nI0814 14:50:11.071492 741 log.go:172] (0x40007d4e70) Data frame received for 5\nI0814 14:50:11.071601 741 log.go:172] 
(0x4000966000) (5) Data frame handling\nI0814 14:50:11.071872 741 log.go:172] (0x40007d4e70) (0x4000972140) Stream removed, broadcasting: 1\nI0814 14:50:11.073203 741 log.go:172] (0x40007d4e70) Go away received\nI0814 14:50:11.076563 741 log.go:172] (0x40007d4e70) (0x4000972140) Stream removed, broadcasting: 1\nI0814 14:50:11.077007 741 log.go:172] (0x40007d4e70) (0x400080d360) Stream removed, broadcasting: 3\nI0814 14:50:11.077229 741 log.go:172] (0x40007d4e70) (0x4000966000) Stream removed, broadcasting: 5\n" Aug 14 14:50:11.087: INFO: stdout: "" Aug 14 14:50:11.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3589 execpodcv77m -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31176' Aug 14 14:50:12.785: INFO: stderr: "I0814 14:50:12.678259 764 log.go:172] (0x400003ad10) (0x400066e320) Create stream\nI0814 14:50:12.682741 764 log.go:172] (0x400003ad10) (0x400066e320) Stream added, broadcasting: 1\nI0814 14:50:12.695448 764 log.go:172] (0x400003ad10) Reply frame received for 1\nI0814 14:50:12.696194 764 log.go:172] (0x400003ad10) (0x4000736000) Create stream\nI0814 14:50:12.696267 764 log.go:172] (0x400003ad10) (0x4000736000) Stream added, broadcasting: 3\nI0814 14:50:12.697735 764 log.go:172] (0x400003ad10) Reply frame received for 3\nI0814 14:50:12.698034 764 log.go:172] (0x400003ad10) (0x400073a000) Create stream\nI0814 14:50:12.698105 764 log.go:172] (0x400003ad10) (0x400073a000) Stream added, broadcasting: 5\nI0814 14:50:12.699305 764 log.go:172] (0x400003ad10) Reply frame received for 5\nI0814 14:50:12.761472 764 log.go:172] (0x400003ad10) Data frame received for 3\nI0814 14:50:12.761816 764 log.go:172] (0x400003ad10) Data frame received for 5\nI0814 14:50:12.762027 764 log.go:172] (0x400073a000) (5) Data frame handling\nI0814 14:50:12.762271 764 log.go:172] (0x4000736000) (3) Data frame handling\nI0814 14:50:12.762410 764 log.go:172] (0x400003ad10) Data frame received for 
1\nI0814 14:50:12.762509 764 log.go:172] (0x400066e320) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 31176\nI0814 14:50:12.765028 764 log.go:172] (0x400066e320) (1) Data frame sent\nI0814 14:50:12.765261 764 log.go:172] (0x400073a000) (5) Data frame sent\nI0814 14:50:12.765370 764 log.go:172] (0x400003ad10) Data frame received for 5\nI0814 14:50:12.765448 764 log.go:172] (0x400073a000) (5) Data frame handling\nI0814 14:50:12.765534 764 log.go:172] (0x400073a000) (5) Data frame sent\nConnection to 172.18.0.13 31176 port [tcp/31176] succeeded!\nI0814 14:50:12.765635 764 log.go:172] (0x400003ad10) Data frame received for 5\nI0814 14:50:12.765715 764 log.go:172] (0x400073a000) (5) Data frame handling\nI0814 14:50:12.766660 764 log.go:172] (0x400003ad10) (0x400066e320) Stream removed, broadcasting: 1\nI0814 14:50:12.769210 764 log.go:172] (0x400003ad10) Go away received\nI0814 14:50:12.773485 764 log.go:172] (0x400003ad10) (0x400066e320) Stream removed, broadcasting: 1\nI0814 14:50:12.774167 764 log.go:172] (0x400003ad10) (0x4000736000) Stream removed, broadcasting: 3\nI0814 14:50:12.774416 764 log.go:172] (0x400003ad10) (0x400073a000) Stream removed, broadcasting: 5\n" Aug 14 14:50:12.786: INFO: stdout: "" Aug 14 14:50:12.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3589 execpodcv77m -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31176' Aug 14 14:50:14.292: INFO: stderr: "I0814 14:50:14.166472 788 log.go:172] (0x40007f6420) (0x4000984140) Create stream\nI0814 14:50:14.169356 788 log.go:172] (0x40007f6420) (0x4000984140) Stream added, broadcasting: 1\nI0814 14:50:14.184523 788 log.go:172] (0x40007f6420) Reply frame received for 1\nI0814 14:50:14.185427 788 log.go:172] (0x40007f6420) (0x40009841e0) Create stream\nI0814 14:50:14.185494 788 log.go:172] (0x40007f6420) (0x40009841e0) Stream added, broadcasting: 3\nI0814 14:50:14.187065 788 log.go:172] (0x40007f6420) Reply frame 
received for 3\nI0814 14:50:14.187430 788 log.go:172] (0x40007f6420) (0x400082b360) Create stream\nI0814 14:50:14.187509 788 log.go:172] (0x40007f6420) (0x400082b360) Stream added, broadcasting: 5\nI0814 14:50:14.188688 788 log.go:172] (0x40007f6420) Reply frame received for 5\nI0814 14:50:14.274154 788 log.go:172] (0x40007f6420) Data frame received for 3\nI0814 14:50:14.274401 788 log.go:172] (0x40009841e0) (3) Data frame handling\nI0814 14:50:14.274972 788 log.go:172] (0x40007f6420) Data frame received for 5\nI0814 14:50:14.275097 788 log.go:172] (0x400082b360) (5) Data frame handling\nI0814 14:50:14.275268 788 log.go:172] (0x40007f6420) Data frame received for 1\nI0814 14:50:14.275365 788 log.go:172] (0x4000984140) (1) Data frame handling\nI0814 14:50:14.276681 788 log.go:172] (0x400082b360) (5) Data frame sent\nI0814 14:50:14.277005 788 log.go:172] (0x4000984140) (1) Data frame sent\nI0814 14:50:14.277219 788 log.go:172] (0x40007f6420) Data frame received for 5\nI0814 14:50:14.277305 788 log.go:172] (0x400082b360) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 31176\nConnection to 172.18.0.15 31176 port [tcp/31176] succeeded!\nI0814 14:50:14.278857 788 log.go:172] (0x40007f6420) (0x4000984140) Stream removed, broadcasting: 1\nI0814 14:50:14.280813 788 log.go:172] (0x40007f6420) Go away received\nI0814 14:50:14.283016 788 log.go:172] (0x40007f6420) (0x4000984140) Stream removed, broadcasting: 1\nI0814 14:50:14.283259 788 log.go:172] (0x40007f6420) (0x40009841e0) Stream removed, broadcasting: 3\nI0814 14:50:14.283472 788 log.go:172] (0x40007f6420) (0x400082b360) Stream removed, broadcasting: 5\n" Aug 14 14:50:14.293: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:50:14.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3589" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:34.495 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":91,"skipped":1580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:50:15.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-08012364-f584-408b-8600-b8d96dbfc399 STEP: Creating a pod to test consume configMaps Aug 14 14:50:16.667: INFO: Waiting up to 5m0s for pod "pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6" in namespace "configmap-5006" to be "Succeeded or Failed" Aug 14 14:50:16.801: INFO: Pod "pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 133.758478ms Aug 14 14:50:18.977: INFO: Pod "pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309522146s Aug 14 14:50:21.536: INFO: Pod "pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.868712119s Aug 14 14:50:23.841: INFO: Pod "pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.173969865s Aug 14 14:50:25.994: INFO: Pod "pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6": Phase="Running", Reason="", readiness=true. Elapsed: 9.327185265s Aug 14 14:50:28.260: INFO: Pod "pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.592753494s STEP: Saw pod success Aug 14 14:50:28.260: INFO: Pod "pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6" satisfied condition "Succeeded or Failed" Aug 14 14:50:28.627: INFO: Trying to get logs from node kali-worker pod pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6 container configmap-volume-test: STEP: delete the pod Aug 14 14:50:30.047: INFO: Waiting for pod pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6 to disappear Aug 14 14:50:30.098: INFO: Pod pod-configmaps-063b81dc-afbc-446c-8b73-711118b880f6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:50:30.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5006" for this suite. 
• [SLOW TEST:14.772 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1612,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:50:30.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1089, will wait for the garbage collector to delete the pods Aug 14 14:50:42.970: INFO: Deleting Job.batch foo took: 9.05433ms Aug 14 14:50:43.271: INFO: Terminating Job.batch foo pods took: 300.912361ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:51:23.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1089" for this suite. 
• [SLOW TEST:53.470 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":93,"skipped":1640,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:51:23.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 14:51:23.721: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27e44e05-e66c-4e11-a837-eb86bf66183b" in namespace "projected-7766" to be "Succeeded or Failed" Aug 14 14:51:23.771: INFO: Pod "downwardapi-volume-27e44e05-e66c-4e11-a837-eb86bf66183b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.487184ms Aug 14 14:51:25.823: INFO: Pod "downwardapi-volume-27e44e05-e66c-4e11-a837-eb86bf66183b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.101473011s Aug 14 14:51:27.879: INFO: Pod "downwardapi-volume-27e44e05-e66c-4e11-a837-eb86bf66183b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157070815s STEP: Saw pod success Aug 14 14:51:27.879: INFO: Pod "downwardapi-volume-27e44e05-e66c-4e11-a837-eb86bf66183b" satisfied condition "Succeeded or Failed" Aug 14 14:51:28.171: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-27e44e05-e66c-4e11-a837-eb86bf66183b container client-container: STEP: delete the pod Aug 14 14:51:28.359: INFO: Waiting for pod downwardapi-volume-27e44e05-e66c-4e11-a837-eb86bf66183b to disappear Aug 14 14:51:28.383: INFO: Pod downwardapi-volume-27e44e05-e66c-4e11-a837-eb86bf66183b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:51:28.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7766" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1658,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:51:28.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 14 14:51:30.087: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 14 14:51:32.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013490, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013490, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013490, 
loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013490, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 14:51:34.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013490, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013490, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013490, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733013490, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 14:51:37.186: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:51:37.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1832-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:51:38.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2904" for this suite. STEP: Destroying namespace "webhook-2904-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.795 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":95,"skipped":1673,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:51:39.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 14:51:39.411: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9b07a562-f061-442d-91bb-8e643e750e51" in namespace "security-context-test-5418" to be "Succeeded or Failed" Aug 14 14:51:39.775: INFO: Pod "alpine-nnp-false-9b07a562-f061-442d-91bb-8e643e750e51": Phase="Pending", Reason="", readiness=false. Elapsed: 363.550253ms Aug 14 14:51:41.782: INFO: Pod "alpine-nnp-false-9b07a562-f061-442d-91bb-8e643e750e51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370968666s Aug 14 14:51:43.794: INFO: Pod "alpine-nnp-false-9b07a562-f061-442d-91bb-8e643e750e51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.383127139s Aug 14 14:51:45.805: INFO: Pod "alpine-nnp-false-9b07a562-f061-442d-91bb-8e643e750e51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.394075098s Aug 14 14:51:45.806: INFO: Pod "alpine-nnp-false-9b07a562-f061-442d-91bb-8e643e750e51" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:51:45.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5418" for this suite. 
• [SLOW TEST:6.609 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1679,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:51:45.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-3410 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 14 14:51:46.485: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 14 14:51:47.031: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 14 14:51:49.141: INFO: The status of Pod netserver-0 is 
Pending, waiting for it to be Running (with Ready = true)
Aug 14 14:51:51.348: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 14:51:53.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:51:55.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:51:57.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:51:59.038: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:52:01.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:52:03.038: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:52:05.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:52:07.064: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:52:09.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 14:52:11.043: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 14 14:52:11.051: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 14 14:52:21.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.32:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3410 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 14:52:21.124: INFO: >>> kubeConfig: /root/.kube/config
I0814 14:52:21.186726 10 log.go:172] (0x40024236b0) (0x4001edc280) Create stream
I0814 14:52:21.186940 10 log.go:172] (0x40024236b0) (0x4001edc280) Stream added, broadcasting: 1
I0814 14:52:21.191407 10 log.go:172] (0x40024236b0) Reply frame received for 1
I0814 14:52:21.191706 10 log.go:172] (0x40024236b0) (0x40011c4140) Create stream
I0814 14:52:21.191888 10 log.go:172] (0x40024236b0) (0x40011c4140) Stream added, broadcasting: 3
I0814 14:52:21.193881 10 log.go:172] (0x40024236b0) Reply frame received for 3
I0814 14:52:21.194039 10 log.go:172] (0x40024236b0) (0x4001edc460) Create stream
I0814 14:52:21.194115 10 log.go:172] (0x40024236b0) (0x4001edc460) Stream added, broadcasting: 5
I0814 14:52:21.195396 10 log.go:172] (0x40024236b0) Reply frame received for 5
I0814 14:52:21.262960 10 log.go:172] (0x40024236b0) Data frame received for 3
I0814 14:52:21.263271 10 log.go:172] (0x40011c4140) (3) Data frame handling
I0814 14:52:21.263464 10 log.go:172] (0x40011c4140) (3) Data frame sent
I0814 14:52:21.263631 10 log.go:172] (0x40024236b0) Data frame received for 3
I0814 14:52:21.263789 10 log.go:172] (0x40011c4140) (3) Data frame handling
I0814 14:52:21.263995 10 log.go:172] (0x40024236b0) Data frame received for 5
I0814 14:52:21.264207 10 log.go:172] (0x4001edc460) (5) Data frame handling
I0814 14:52:21.264901 10 log.go:172] (0x40024236b0) Data frame received for 1
I0814 14:52:21.265055 10 log.go:172] (0x4001edc280) (1) Data frame handling
I0814 14:52:21.265171 10 log.go:172] (0x4001edc280) (1) Data frame sent
I0814 14:52:21.265335 10 log.go:172] (0x40024236b0) (0x4001edc280) Stream removed, broadcasting: 1
I0814 14:52:21.265509 10 log.go:172] (0x40024236b0) Go away received
I0814 14:52:21.265833 10 log.go:172] (0x40024236b0) (0x4001edc280) Stream removed, broadcasting: 1
I0814 14:52:21.266011 10 log.go:172] (0x40024236b0) (0x40011c4140) Stream removed, broadcasting: 3
I0814 14:52:21.266206 10 log.go:172] (0x40024236b0) (0x4001edc460) Stream removed, broadcasting: 5
Aug 14 14:52:21.266: INFO: Found all expected endpoints: [netserver-0]
Aug 14 14:52:21.273: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.60:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3410 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 14:52:21.273: INFO: >>> kubeConfig: /root/.kube/config
I0814 14:52:21.341262 10 log.go:172] (0x4002423c30) (0x4001edc6e0) Create stream
I0814 14:52:21.341535 10 log.go:172] (0x4002423c30) (0x4001edc6e0) Stream added, broadcasting: 1
I0814 14:52:21.347748 10 log.go:172] (0x4002423c30) Reply frame received for 1
I0814 14:52:21.348041 10 log.go:172] (0x4002423c30) (0x4001e08000) Create stream
I0814 14:52:21.348169 10 log.go:172] (0x4002423c30) (0x4001e08000) Stream added, broadcasting: 3
I0814 14:52:21.350117 10 log.go:172] (0x4002423c30) Reply frame received for 3
I0814 14:52:21.350275 10 log.go:172] (0x4002423c30) (0x40027cd040) Create stream
I0814 14:52:21.350345 10 log.go:172] (0x4002423c30) (0x40027cd040) Stream added, broadcasting: 5
I0814 14:52:21.351990 10 log.go:172] (0x4002423c30) Reply frame received for 5
I0814 14:52:21.413606 10 log.go:172] (0x4002423c30) Data frame received for 5
I0814 14:52:21.413793 10 log.go:172] (0x40027cd040) (5) Data frame handling
I0814 14:52:21.413986 10 log.go:172] (0x4002423c30) Data frame received for 3
I0814 14:52:21.414233 10 log.go:172] (0x4001e08000) (3) Data frame handling
I0814 14:52:21.414417 10 log.go:172] (0x4001e08000) (3) Data frame sent
I0814 14:52:21.414626 10 log.go:172] (0x4002423c30) Data frame received for 3
I0814 14:52:21.414805 10 log.go:172] (0x4001e08000) (3) Data frame handling
I0814 14:52:21.415000 10 log.go:172] (0x4002423c30) Data frame received for 1
I0814 14:52:21.415099 10 log.go:172] (0x4001edc6e0) (1) Data frame handling
I0814 14:52:21.415219 10 log.go:172] (0x4001edc6e0) (1) Data frame sent
I0814 14:52:21.415325 10 log.go:172] (0x4002423c30) (0x4001edc6e0) Stream removed, broadcasting: 1
I0814 14:52:21.415444 10 log.go:172] (0x4002423c30) Go away received
I0814 14:52:21.415874 10 log.go:172] (0x4002423c30) (0x4001edc6e0) Stream removed, broadcasting: 1
I0814 14:52:21.415999 10 log.go:172] (0x4002423c30) (0x4001e08000) Stream removed, broadcasting: 3
I0814 14:52:21.416097 10 log.go:172] (0x4002423c30) (0x40027cd040) Stream removed, broadcasting: 5
Aug 14 14:52:21.416: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:52:21.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3410" for this suite.
• [SLOW TEST:35.606 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1679,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:52:21.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 14:52:21.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side
validation (kubectl create and apply) allows request with any unknown properties
Aug 14 14:52:41.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7052 create -f -'
Aug 14 14:52:46.256: INFO: stderr: ""
Aug 14 14:52:46.257: INFO: stdout: "e2e-test-crd-publish-openapi-848-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 14 14:52:46.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7052 delete e2e-test-crd-publish-openapi-848-crds test-cr'
Aug 14 14:52:47.509: INFO: stderr: ""
Aug 14 14:52:47.509: INFO: stdout: "e2e-test-crd-publish-openapi-848-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 14 14:52:47.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7052 apply -f -'
Aug 14 14:52:49.409: INFO: stderr: ""
Aug 14 14:52:49.409: INFO: stdout: "e2e-test-crd-publish-openapi-848-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 14 14:52:49.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7052 delete e2e-test-crd-publish-openapi-848-crds test-cr'
Aug 14 14:52:50.672: INFO: stderr: ""
Aug 14 14:52:50.673: INFO: stdout: "e2e-test-crd-publish-openapi-848-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 14 14:52:50.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-848-crds'
Aug 14 14:52:52.799: INFO: stderr: ""
Aug 14 14:52:52.799: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-848-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:53:02.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7052" for this suite.
• [SLOW TEST:41.812 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":98,"skipped":1698,"failed":0}
SS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:53:03.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-rs76d in namespace proxy-9317
I0814 14:53:05.204335 10 runners.go:190] Created replication controller with name: proxy-service-rs76d, namespace: proxy-9317, replica count: 1
I0814 14:53:06.256133 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:53:07.256943 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:53:08.257606 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:53:09.258233 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:53:10.258969 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:53:11.259778 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:53:12.260351 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:53:13.261166 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:53:14.261850 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:53:15.262640 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0814 14:53:16.263560 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1
runningButNotReady
I0814 14:53:17.264157 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0814 14:53:18.264715 10 runners.go:190] proxy-service-rs76d Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Aug 14 14:53:18.435: INFO: setup took 14.060931164s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 14 14:53:18.449: INFO: (0) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 11.491193ms)
Aug 14 14:53:18.450: INFO: (0) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 14.30226ms)
Aug 14 14:53:18.450: INFO: (0) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 14.4454ms)
Aug 14 14:53:18.450: INFO: (0) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 14.57271ms)
Aug 14 14:53:18.450: INFO: (0) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 13.862955ms)
Aug 14 14:53:18.450: INFO: (0) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 14.157204ms)
Aug 14 14:53:18.450: INFO: (0) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 12.823334ms)
Aug 14 14:53:18.450: INFO: (0) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 12.989984ms)
Aug 14 14:53:18.450: INFO: (0) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 13.998513ms)
Aug 14 14:53:18.451: INFO: (0) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 15.049399ms)
Aug 14 14:53:18.451: INFO: (0) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 15.470451ms)
Aug 14 14:53:18.453: INFO: (0) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 15.346044ms)
Aug 14 14:53:18.454: INFO: (0) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 17.169053ms)
Aug 14 14:53:18.454: INFO: (0) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 17.590962ms)
Aug 14 14:53:18.454: INFO: (0) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 17.600836ms)
Aug 14 14:53:18.456: INFO: (0) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test (200; 13.947696ms)
Aug 14 14:53:18.472: INFO: (1) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 14.326275ms)
Aug 14 14:53:18.472: INFO: (1) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 14.303858ms)
Aug 14 14:53:18.472: INFO: (1) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test<... (200; 14.389907ms)
Aug 14 14:53:18.472: INFO: (1) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 14.914151ms)
Aug 14 14:53:18.472: INFO: (1) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 14.786306ms)
Aug 14 14:53:18.472: INFO: (1) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 15.294877ms)
Aug 14 14:53:18.473: INFO: (1) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 15.800014ms)
Aug 14 14:53:18.473: INFO: (1) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 15.938176ms)
Aug 14 14:53:18.473: INFO: (1) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 15.388854ms)
Aug 14 14:53:18.473: INFO: (1) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 16.208504ms)
Aug 14 14:53:18.473: INFO: (1) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 15.203476ms)
Aug 14 14:53:18.473: INFO: (1) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 15.966225ms)
Aug 14 14:53:18.481: INFO: (2) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 6.936188ms)
Aug 14 14:53:18.482: INFO: (2) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 7.106143ms)
Aug 14 14:53:18.482: INFO: (2) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 7.485345ms)
Aug 14 14:53:18.482: INFO: (2) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 8.78611ms)
Aug 14 14:53:18.482: INFO: (2) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 7.567448ms)
Aug 14 14:53:18.482: INFO: (2) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 8.067ms)
Aug 14 14:53:18.482: INFO: (2) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 9.066905ms)
Aug 14 14:53:18.483: INFO: (2) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 8.881778ms)
Aug 14 14:53:18.482: INFO: (2) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 8.690869ms)
Aug 14 14:53:18.483: INFO: (2) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 8.424251ms)
Aug 14 14:53:18.485: INFO: (2) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: ... (200; 14.802206ms)
Aug 14 14:53:18.488: INFO: (2) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 14.750746ms)
Aug 14 14:53:18.496: INFO: (3) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 6.898698ms)
Aug 14 14:53:18.496: INFO: (3) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 6.573707ms)
Aug 14 14:53:18.496: INFO: (3) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 6.560662ms)
Aug 14 14:53:18.496: INFO: (3) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 7.233396ms)
Aug 14 14:53:18.496: INFO: (3) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 6.682253ms)
Aug 14 14:53:18.496: INFO: (3) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 6.742714ms)
Aug 14 14:53:18.496: INFO: (3) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 6.689004ms)
Aug 14 14:53:18.496: INFO: (3) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 7.06493ms)
Aug 14 14:53:18.496: INFO: (3) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: ... (200; 15.186509ms)
Aug 14 14:53:18.515: INFO: (4) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 16.1397ms)
Aug 14 14:53:18.515: INFO: (4) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 15.160768ms)
Aug 14 14:53:18.515: INFO: (4) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 13.234413ms)
Aug 14 14:53:18.515: INFO: (4) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 16.397023ms)
Aug 14 14:53:18.515: INFO: (4) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 14.188985ms)
Aug 14 14:53:18.516: INFO: (4) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 14.465942ms)
Aug 14 14:53:18.516: INFO: (4) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 15.603768ms)
Aug 14 14:53:18.524: INFO: (5) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 6.906087ms)
Aug 14 14:53:18.524: INFO: (5) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 7.006534ms)
Aug 14 14:53:18.525: INFO: (5) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 7.808433ms)
Aug 14 14:53:18.525: INFO: (5) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 7.383678ms)
Aug 14 14:53:18.525: INFO: (5) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 7.273074ms)
Aug 14 14:53:18.525: INFO: (5) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 7.529336ms)
Aug 14 14:53:18.526: INFO: (5) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 8.856611ms)
Aug 14 14:53:18.526: INFO: (5) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 8.815954ms)
Aug 14 14:53:18.526: INFO: (5) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 8.74244ms)
Aug 14 14:53:18.531: INFO: (5) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 13.421834ms)
Aug 14 14:53:18.531: INFO: (5) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 14.193566ms)
Aug 14 14:53:18.531: INFO: (5) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 14.092672ms)
Aug 14 14:53:18.531: INFO: (5) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 13.91844ms)
Aug 14 14:53:18.531: INFO: (5) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 13.275765ms)
Aug 14 14:53:18.531: INFO: (5) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test<... (200; 4.100381ms)
Aug 14 14:53:18.600: INFO: (6) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 4.208503ms)
Aug 14 14:53:18.601: INFO: (6) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 5.020416ms)
Aug 14 14:53:18.601: INFO: (6) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 5.34059ms)
Aug 14 14:53:18.602: INFO: (6) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 5.554852ms)
Aug 14 14:53:18.602: INFO: (6) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 5.49387ms)
Aug 14 14:53:18.602: INFO: (6) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 5.649224ms)
Aug 14 14:53:18.603: INFO: (6) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test (200; 6.339701ms)
Aug 14 14:53:18.611: INFO: (7) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 6.240236ms)
Aug 14 14:53:18.612: INFO: (7) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 6.4858ms)
Aug 14 14:53:18.612: INFO: (7) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 6.121065ms)
Aug 14 14:53:18.612: INFO: (7) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 6.470753ms)
Aug 14 14:53:18.612: INFO: (7) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 6.961108ms)
Aug 14 14:53:18.612: INFO: (7) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 7.132443ms)
Aug 14 14:53:18.612: INFO: (7) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test (200; 5.140391ms)
Aug 14 14:53:18.619: INFO: (8) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 5.402361ms)
Aug 14 14:53:18.619: INFO: (8) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 5.443941ms)
Aug 14 14:53:18.619: INFO: (8) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 5.710556ms)
Aug 14 14:53:18.619: INFO: (8) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 5.729918ms)
Aug 14 14:53:18.619: INFO: (8) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 6.020156ms)
Aug 14 14:53:18.619: INFO: (8) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 5.869007ms)
Aug 14 14:53:18.620: INFO: (8) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 6.261393ms)
Aug 14 14:53:18.620: INFO: (8) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 6.436498ms)
Aug 14 14:53:18.620: INFO: (8) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 6.476601ms)
Aug 14 14:53:18.620: INFO: (8) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 6.486226ms)
Aug 14 14:53:18.620: INFO: (8) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test<... (200; 7.651546ms)
Aug 14 14:53:18.630: INFO: (9) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 7.603396ms)
Aug 14 14:53:18.630: INFO: (9) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 7.740412ms)
Aug 14 14:53:18.631: INFO: (9) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 9.024388ms)
Aug 14 14:53:18.632: INFO: (9) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 9.256581ms)
Aug 14 14:53:18.632: INFO: (9) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 9.691947ms)
Aug 14 14:53:18.632: INFO: (9) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 9.708827ms)
Aug 14 14:53:18.632: INFO: (9) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 9.937387ms)
Aug 14 14:53:18.632: INFO: (9) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 9.695525ms)
Aug 14 14:53:18.632: INFO: (9) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 9.887424ms)
Aug 14 14:53:18.632: INFO: (9) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 10.162428ms)
Aug 14 14:53:18.632: INFO: (9) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 10.31344ms)
Aug 14 14:53:18.633: INFO: (9) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: ... (200; 451.346914ms)
Aug 14 14:53:19.086: INFO: (10) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test (200; 451.857256ms)
Aug 14 14:53:19.087: INFO: (10) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 452.617441ms)
Aug 14 14:53:19.087: INFO: (10) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 454.4195ms)
Aug 14 14:53:19.087: INFO: (10) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 454.779609ms)
Aug 14 14:53:19.088: INFO: (10) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 454.837776ms)
Aug 14 14:53:19.088: INFO: (10) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 451.033028ms)
Aug 14 14:53:19.089: INFO: (10) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 455.366074ms)
Aug 14 14:53:19.089: INFO: (10) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 454.27147ms)
Aug 14 14:53:19.089: INFO: (10) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 455.241126ms)
Aug 14 14:53:19.514: INFO: (11) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 425.487847ms)
Aug 14 14:53:19.515: INFO: (11) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 425.377163ms)
Aug 14 14:53:19.515: INFO: (11) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 425.346087ms)
Aug 14 14:53:19.515: INFO: (11) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 425.825585ms)
Aug 14 14:53:19.515: INFO: (11) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 425.641664ms)
Aug 14 14:53:19.515: INFO: (11) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 425.874813ms)
Aug 14 14:53:19.515: INFO: (11) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 426.481627ms)
Aug 14 14:53:19.516: INFO: (11) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 426.90379ms)
Aug 14 14:53:19.516: INFO: (11) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: ... (200; 4.875737ms)
Aug 14 14:53:19.522: INFO: (12) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test (200; 4.816364ms)
Aug 14 14:53:19.523: INFO: (12) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 5.523745ms)
Aug 14 14:53:19.523: INFO: (12) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 5.896191ms)
Aug 14 14:53:19.724: INFO: (12) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 205.994205ms)
Aug 14 14:53:19.725: INFO: (12) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 207.341088ms)
Aug 14 14:53:19.726: INFO: (12) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 207.723345ms)
Aug 14 14:53:19.726: INFO: (12) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 208.790087ms)
Aug 14 14:53:19.727: INFO: (12) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 208.549435ms)
Aug 14 14:53:19.727: INFO: (12) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 209.289638ms)
Aug 14 14:53:19.727: INFO: (12) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 208.744202ms)
Aug 14 14:53:19.727: INFO: (12) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 209.590192ms)
Aug 14 14:53:19.727: INFO: (12) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 209.516376ms)
Aug 14 14:53:19.727: INFO: (12) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 209.537825ms)
Aug 14 14:53:19.728: INFO: (12) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 209.71531ms)
Aug 14 14:53:19.765: INFO: (13) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 36.939782ms)
Aug 14 14:53:19.766: INFO: (13) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 37.334871ms)
Aug 14 14:53:19.766: INFO: (13) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 37.833749ms)
Aug 14 14:53:19.766: INFO: (13) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 38.097513ms)
Aug 14 14:53:19.766: INFO: (13) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 38.326209ms)
Aug 14 14:53:19.766: INFO: (13) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 38.136842ms)
Aug 14 14:53:19.766: INFO: (13) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 38.243282ms)
Aug 14 14:53:19.767: INFO: (13) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test (200; 38.120569ms)
Aug 14 14:53:19.768: INFO: (13) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 39.653252ms)
Aug 14 14:53:19.769: INFO: (13) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 40.788816ms)
Aug 14 14:53:19.769: INFO: (13) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 40.5773ms)
Aug 14 14:53:19.769: INFO: (13) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 41.272427ms)
Aug 14 14:53:19.769: INFO: (13) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 41.352586ms)
Aug 14 14:53:19.770: INFO: (13) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 41.71311ms)
Aug 14 14:53:19.770: INFO: (13) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 41.805478ms)
Aug 14 14:53:19.775: INFO: (14) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test (200; 7.831535ms)
Aug 14 14:53:19.778: INFO: (14) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 8.235961ms)
Aug 14 14:53:19.779: INFO: (14) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 8.621909ms)
Aug 14 14:53:19.779: INFO: (14) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 8.487471ms)
Aug 14 14:53:19.779: INFO: (14) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 9.072759ms)
Aug 14 14:53:19.779: INFO: (14) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 8.994804ms)
Aug 14 14:53:19.779: INFO: (14) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 9.090594ms)
Aug 14 14:53:19.779: INFO: (14) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 8.864207ms)
Aug 14 14:53:19.779: INFO: (14) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 8.963257ms)
Aug 14 14:53:19.780: INFO: (14) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 9.326691ms)
Aug 14 14:53:19.780: INFO: (14) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 9.490163ms)
Aug 14 14:53:19.781: INFO: (14) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 10.876137ms)
Aug 14 14:53:19.781: INFO: (14) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 10.863922ms)
Aug 14 14:53:19.781: INFO: (14) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 10.834585ms)
Aug 14 14:53:19.787: INFO: (15) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 5.005554ms)
Aug 14 14:53:19.788: INFO: (15) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 6.270103ms)
Aug 14 14:53:19.789: INFO: (15) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 7.276489ms)
Aug 14 14:53:19.789: INFO: (15) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 7.459047ms)
Aug 14 14:53:19.789: INFO: (15) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 7.323194ms)
Aug 14 14:53:19.789: INFO: (15) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 7.379733ms)
Aug 14 14:53:19.789: INFO: (15) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... (200; 7.496642ms)
Aug 14 14:53:19.790: INFO: (15) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 7.640579ms)
Aug 14 14:53:19.790: INFO: (15) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 8.139524ms)
Aug 14 14:53:19.790: INFO: (15) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 8.137408ms)
Aug 14 14:53:19.790: INFO: (15) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 8.383054ms)
Aug 14 14:53:19.790: INFO: (15) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... (200; 8.364017ms)
Aug 14 14:53:19.790: INFO: (15) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test<... (200; 4.034063ms)
Aug 14 14:53:19.797: INFO: (16) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 6.387979ms)
Aug 14 14:53:19.800: INFO: (16) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 8.486194ms)
Aug 14 14:53:19.800: INFO: (16) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 8.871354ms)
Aug 14 14:53:19.800: INFO: (16) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 9.17062ms)
Aug 14 14:53:19.800: INFO: (16) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 9.380273ms)
Aug 14 14:53:19.800: INFO: (16) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 9.133307ms)
Aug 14 14:53:19.800: INFO: (16) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 9.481436ms)
Aug 14 14:53:19.802: INFO: (16) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 10.249561ms)
Aug 14 14:53:19.802: INFO: (16) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/:
... (200; 10.746165ms) Aug 14 14:53:19.802: INFO: (16) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test<... (200; 129.938562ms) Aug 14 14:53:19.935: INFO: (17) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: ... (200; 134.371252ms) Aug 14 14:53:19.938: INFO: (17) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 134.068846ms) Aug 14 14:53:19.938: INFO: (17) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 134.091765ms) Aug 14 14:53:19.938: INFO: (17) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 134.15904ms) Aug 14 14:53:19.939: INFO: (17) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 134.735587ms) Aug 14 14:53:19.939: INFO: (17) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 134.968193ms) Aug 14 14:53:19.939: INFO: (17) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 135.06024ms) Aug 14 14:53:19.939: INFO: (17) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 135.390546ms) Aug 14 14:53:19.944: INFO: (18) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:460/proxy/: tls baz (200; 5.025164ms) Aug 14 14:53:19.945: INFO: (18) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 5.279969ms) Aug 14 14:53:19.947: INFO: (18) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 7.065273ms) Aug 14 14:53:19.947: INFO: (18) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 7.607037ms) Aug 14 14:53:19.947: INFO: (18) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:1080/proxy/: test<... 
(200; 7.437383ms) Aug 14 14:53:19.993: INFO: (18) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 53.541187ms) Aug 14 14:53:19.995: INFO: (18) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 54.683571ms) Aug 14 14:53:19.995: INFO: (18) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 55.048952ms) Aug 14 14:53:19.996: INFO: (18) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: ... (200; 56.199681ms) Aug 14 14:53:19.996: INFO: (18) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 56.161803ms) Aug 14 14:53:19.996: INFO: (18) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 56.977589ms) Aug 14 14:53:19.997: INFO: (18) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 56.749007ms) Aug 14 14:53:19.997: INFO: (18) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 56.982965ms) Aug 14 14:53:19.997: INFO: (18) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 57.0189ms) Aug 14 14:53:20.004: INFO: (19) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c/proxy/: test (200; 6.118934ms) Aug 14 14:53:20.006: INFO: (19) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:1080/proxy/: ... 
(200; 8.340401ms) Aug 14 14:53:20.006: INFO: (19) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 8.620849ms) Aug 14 14:53:20.007: INFO: (19) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 8.931134ms) Aug 14 14:53:20.007: INFO: (19) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname2/proxy/: bar (200; 9.33299ms) Aug 14 14:53:20.007: INFO: (19) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:462/proxy/: tls qux (200; 9.558097ms) Aug 14 14:53:20.007: INFO: (19) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname1/proxy/: foo (200; 9.61687ms) Aug 14 14:53:20.008: INFO: (19) /api/v1/namespaces/proxy-9317/pods/proxy-service-rs76d-dqw8c:162/proxy/: bar (200; 10.319134ms) Aug 14 14:53:20.008: INFO: (19) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname2/proxy/: tls qux (200; 9.940992ms) Aug 14 14:53:20.008: INFO: (19) /api/v1/namespaces/proxy-9317/services/http:proxy-service-rs76d:portname1/proxy/: foo (200; 10.328967ms) Aug 14 14:53:20.008: INFO: (19) /api/v1/namespaces/proxy-9317/services/proxy-service-rs76d:portname2/proxy/: bar (200; 10.698429ms) Aug 14 14:53:20.008: INFO: (19) /api/v1/namespaces/proxy-9317/pods/http:proxy-service-rs76d-dqw8c:160/proxy/: foo (200; 10.678711ms) Aug 14 14:53:20.008: INFO: (19) /api/v1/namespaces/proxy-9317/pods/https:proxy-service-rs76d-dqw8c:443/proxy/: test<... 
(200; 11.099472ms) Aug 14 14:53:20.008: INFO: (19) /api/v1/namespaces/proxy-9317/services/https:proxy-service-rs76d:tlsportname1/proxy/: tls baz (200; 11.123048ms) STEP: deleting ReplicationController proxy-service-rs76d in namespace proxy-9317, will wait for the garbage collector to delete the pods Aug 14 14:53:20.174: INFO: Deleting ReplicationController proxy-service-rs76d took: 110.735175ms Aug 14 14:53:20.475: INFO: Terminating ReplicationController proxy-service-rs76d pods took: 300.753989ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:53:25.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9317" for this suite. • [SLOW TEST:22.123 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":99,"skipped":1700,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:53:25.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 14:53:25.771: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14" in namespace "projected-3697" to be "Succeeded or Failed" Aug 14 14:53:25.968: INFO: Pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14": Phase="Pending", Reason="", readiness=false. Elapsed: 197.555404ms Aug 14 14:53:27.976: INFO: Pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205087214s Aug 14 14:53:30.801: INFO: Pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14": Phase="Pending", Reason="", readiness=false. Elapsed: 5.029834997s Aug 14 14:53:33.407: INFO: Pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14": Phase="Pending", Reason="", readiness=false. Elapsed: 7.635868856s Aug 14 14:53:35.507: INFO: Pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14": Phase="Pending", Reason="", readiness=false. Elapsed: 9.736555264s Aug 14 14:53:37.516: INFO: Pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14": Phase="Pending", Reason="", readiness=false. Elapsed: 11.744785854s Aug 14 14:53:40.094: INFO: Pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14": Phase="Pending", Reason="", readiness=false. Elapsed: 14.323053275s Aug 14 14:53:42.287: INFO: Pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.516147221s Aug 14 14:53:44.557: INFO: Pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.786458296s STEP: Saw pod success Aug 14 14:53:44.558: INFO: Pod "downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14" satisfied condition "Succeeded or Failed" Aug 14 14:53:44.617: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14 container client-container: STEP: delete the pod Aug 14 14:53:44.779: INFO: Waiting for pod downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14 to disappear Aug 14 14:53:44.799: INFO: Pod downwardapi-volume-8f2e3ed4-7de8-4e24-bdb7-e58a101a0e14 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:53:44.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3697" for this suite. • [SLOW TEST:19.430 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1758,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:53:44.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:53:57.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4790" for this suite.
• [SLOW TEST:13.037 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":101,"skipped":1763,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:53:57.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-6fe23e52-01dc-4367-b6e4-1d3e56abe0ae in namespace container-probe-6969
Aug 14 14:54:05.718: INFO: Started pod test-webserver-6fe23e52-01dc-4367-b6e4-1d3e56abe0ae in namespace container-probe-6969
STEP: checking the pod's current state and verifying that restartCount is present
Aug 14 14:54:05.725: INFO: Initial restart count of pod test-webserver-6fe23e52-01dc-4367-b6e4-1d3e56abe0ae is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:58:06.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6969" for this suite.
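The passing probe test above creates a single webserver pod and then simply watches that its restartCount stays at 0 over a roughly four-minute observation window (14:54 to 14:58). A pod of that shape looks roughly like the following manifest — a minimal sketch only: the image, port, and probe timing values are illustrative, not taken from the test source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver           # the e2e run suffixes this with a UID
  namespace: container-probe-6969
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0  # illustrative
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz           # keeps returning 2xx, so the kubelet never restarts it
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
```

As long as /healthz keeps answering with a success status, the kubelet never kills the container, which is what the test asserts by re-checking restartCount at the end of the window.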
• [SLOW TEST:249.166 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1779,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:58:07.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-679.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-679.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 14 14:58:14.544: INFO: DNS probes using dns-679/dns-test-c7dc2053-8649-45a1-a440-a081a3706e71 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:58:14.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-679" for this suite. 
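The wheezy/jessie probe scripts above derive each pod's DNS A record from its IP address with an awk substitution. The same transformation can be sketched in Python (the function name and sample IP below are illustrative, not from the test):

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Mirror the awk pipeline in the probe script: the dots in the pod IP
    become dashes, then the namespace and the pod subdomain are appended."""
    return "{}.{}.pod.{}".format(pod_ip.replace(".", "-"), namespace, cluster_domain)

# A pod at 10.244.1.5 in namespace dns-679 resolves under this name:
print(pod_a_record("10.244.1.5", "dns-679"))  # 10-244-1-5.dns-679.pod.cluster.local
```

The probe pods then resolve that name over both UDP and TCP (`dig +notcp` / `dig +tcp`) and write an OK marker file per lookup that succeeded.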
• [SLOW TEST:7.690 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":103,"skipped":1814,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 14:58:14.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5332
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5332
STEP: Creating statefulset with conflicting port in namespace statefulset-5332
STEP: Waiting until pod test-pod starts running in namespace statefulset-5332
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-5332
Aug 14 14:58:21.342: INFO: Observed stateful pod in namespace: statefulset-5332, name: ss-0, uid: bc720396-f769-4a9b-bff2-6c05152d94e3, status phase: Pending. Waiting for statefulset controller to delete.
Aug 14 14:58:21.497: INFO: Observed stateful pod in namespace: statefulset-5332, name: ss-0, uid: bc720396-f769-4a9b-bff2-6c05152d94e3, status phase: Failed. Waiting for statefulset controller to delete.
Aug 14 14:58:21.510: INFO: Observed stateful pod in namespace: statefulset-5332, name: ss-0, uid: bc720396-f769-4a9b-bff2-6c05152d94e3, status phase: Failed. Waiting for statefulset controller to delete.
Aug 14 14:58:21.553: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5332
STEP: Removing pod with conflicting port in namespace statefulset-5332
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5332 and is running
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 14 14:58:30.259: INFO: Deleting all statefulsets in ns statefulset-5332
Aug 14 14:58:30.264: INFO: Scaling statefulset ss to 0
Aug 14 14:58:50.290: INFO: Waiting for statefulset status.replicas to update to 0
Aug 14 14:58:50.294: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 14:58:50.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5332" for this suite.
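The "conflicting port" in the test above is a hostPort collision: a plain pod claims a host port on the chosen node first, so ss-0, pinned to the same node and port, lands in Failed until the colliding pod is removed and the StatefulSet controller recreates it. A rough sketch of the StatefulSet side — the image, labels, and port number are illustrative; only the `ss` and `test` names come from the log:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-5332
spec:
  serviceName: test              # headless service created earlier in the test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine   # illustrative
        ports:
        - containerPort: 80
          hostPort: 21017            # illustrative; collides with the pre-created test-pod
```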
• [SLOW TEST:35.611 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":104,"skipped":1828,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:58:50.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-9f2bc49e-c1e8-4a2a-ae25-42d9405e0abe STEP: Creating a pod to test consume secrets Aug 14 14:58:50.449: INFO: Waiting up to 5m0s for pod "pod-secrets-cdff6df8-e62a-4f2b-a6bb-a4a300ccc967" in namespace "secrets-1235" to be "Succeeded or Failed" Aug 14 14:58:50.470: INFO: Pod "pod-secrets-cdff6df8-e62a-4f2b-a6bb-a4a300ccc967": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.887636ms Aug 14 14:58:52.726: INFO: Pod "pod-secrets-cdff6df8-e62a-4f2b-a6bb-a4a300ccc967": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277108044s Aug 14 14:58:54.804: INFO: Pod "pod-secrets-cdff6df8-e62a-4f2b-a6bb-a4a300ccc967": Phase="Pending", Reason="", readiness=false. Elapsed: 4.354642765s Aug 14 14:58:56.818: INFO: Pod "pod-secrets-cdff6df8-e62a-4f2b-a6bb-a4a300ccc967": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.368828342s STEP: Saw pod success Aug 14 14:58:56.818: INFO: Pod "pod-secrets-cdff6df8-e62a-4f2b-a6bb-a4a300ccc967" satisfied condition "Succeeded or Failed" Aug 14 14:58:56.822: INFO: Trying to get logs from node kali-worker pod pod-secrets-cdff6df8-e62a-4f2b-a6bb-a4a300ccc967 container secret-volume-test: STEP: delete the pod Aug 14 14:58:56.961: INFO: Waiting for pod pod-secrets-cdff6df8-e62a-4f2b-a6bb-a4a300ccc967 to disappear Aug 14 14:58:56.986: INFO: Pod pod-secrets-cdff6df8-e62a-4f2b-a6bb-a4a300ccc967 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 14:58:56.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1235" for this suite. 
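"With mappings" in the secret-volume test above refers to the volume's `items` field, which projects selected secret keys to chosen file paths instead of using the key names directly. A minimal sketch — the key and path names and the image are illustrative; the secret, pod, and container names are taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-cdff6df8-e62a-4f2b-a6bb-a4a300ccc967
  namespace: secrets-1235
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8  # illustrative
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-9f2bc49e-c1e8-4a2a-ae25-42d9405e0abe
      items:                    # the "mappings"
      - key: data-1             # illustrative key name
        path: new-path-data-1   # file appears at /etc/secret-volume/new-path-data-1
```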
• [SLOW TEST:6.675 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1890,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 14:58:57.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 14 14:58:58.538: INFO: Pod name wrapped-volume-race-c06149c9-2a81-4875-8d1e-876b4e6c8e17: Found 0 pods out of 5 Aug 14 14:59:04.106: INFO: Pod name wrapped-volume-race-c06149c9-2a81-4875-8d1e-876b4e6c8e17: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c06149c9-2a81-4875-8d1e-876b4e6c8e17 in namespace emptydir-wrapper-3860, will wait for the garbage collector to delete the pods Aug 14 14:59:21.169: INFO: Deleting ReplicationController 
wrapped-volume-race-c06149c9-2a81-4875-8d1e-876b4e6c8e17 took: 117.141211ms
Aug 14 14:59:21.570: INFO: Terminating ReplicationController wrapped-volume-race-c06149c9-2a81-4875-8d1e-876b4e6c8e17 pods took: 400.666772ms
STEP: Creating RC which spawns configmap-volume pods
Aug 14 14:59:34.317: INFO: Pod name wrapped-volume-race-ede3feb0-5b0a-4ef2-a400-5a8b0792d6a2: Found 0 pods out of 5
Aug 14 14:59:39.340: INFO: Pod name wrapped-volume-race-ede3feb0-5b0a-4ef2-a400-5a8b0792d6a2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ede3feb0-5b0a-4ef2-a400-5a8b0792d6a2 in namespace emptydir-wrapper-3860, will wait for the garbage collector to delete the pods
Aug 14 14:59:58.322: INFO: Deleting ReplicationController wrapped-volume-race-ede3feb0-5b0a-4ef2-a400-5a8b0792d6a2 took: 346.953985ms
Aug 14 14:59:58.923: INFO: Terminating ReplicationController wrapped-volume-race-ede3feb0-5b0a-4ef2-a400-5a8b0792d6a2 pods took: 600.924408ms
STEP: Creating RC which spawns configmap-volume pods
Aug 14 15:00:15.811: INFO: Pod name wrapped-volume-race-ac9de4d0-4863-4de6-ac44-4f67662983cd: Found 0 pods out of 5
Aug 14 15:00:20.957: INFO: Pod name wrapped-volume-race-ac9de4d0-4863-4de6-ac44-4f67662983cd: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ac9de4d0-4863-4de6-ac44-4f67662983cd in namespace emptydir-wrapper-3860, will wait for the garbage collector to delete the pods
Aug 14 15:00:37.339: INFO: Deleting ReplicationController wrapped-volume-race-ac9de4d0-4863-4de6-ac44-4f67662983cd took: 15.231983ms
Aug 14 15:00:38.240: INFO: Terminating ReplicationController wrapped-volume-race-ac9de4d0-4863-4de6-ac44-4f67662983cd pods took: 900.697877ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:01:17.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3860" for this suite.
• [SLOW TEST:140.716 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":106,"skipped":1900,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:01:17.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-zch6
STEP: Creating a pod to test atomic-volume-subpath
Aug 14 15:01:18.060: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-zch6" in namespace "subpath-6866" to be "Succeeded or Failed"
Aug 14 15:01:18.078: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.298879ms
Aug 14 15:01:20.212: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152245782s
Aug 14 15:01:22.243: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183020568s
Aug 14 15:01:24.280: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Running", Reason="", readiness=true. Elapsed: 6.219600506s
Aug 14 15:01:26.369: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Running", Reason="", readiness=true. Elapsed: 8.308904405s
Aug 14 15:01:28.419: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Running", Reason="", readiness=true. Elapsed: 10.35904559s
Aug 14 15:01:30.435: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Running", Reason="", readiness=true. Elapsed: 12.374413637s
Aug 14 15:01:32.465: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Running", Reason="", readiness=true. Elapsed: 14.404788571s
Aug 14 15:01:34.472: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Running", Reason="", readiness=true. Elapsed: 16.411683524s
Aug 14 15:01:36.519: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Running", Reason="", readiness=true. Elapsed: 18.458452374s
Aug 14 15:01:38.530: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Running", Reason="", readiness=true. Elapsed: 20.470071539s
Aug 14 15:01:40.540: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Running", Reason="", readiness=true. Elapsed: 22.479390124s
Aug 14 15:01:42.547: INFO: Pod "pod-subpath-test-projected-zch6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.486619837s
STEP: Saw pod success
Aug 14 15:01:42.547: INFO: Pod "pod-subpath-test-projected-zch6" satisfied condition "Succeeded or Failed"
Aug 14 15:01:42.553: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-zch6 container test-container-subpath-projected-zch6:
STEP: delete the pod
Aug 14 15:01:42.619: INFO: Waiting for pod pod-subpath-test-projected-zch6 to disappear
Aug 14 15:01:42.865: INFO: Pod pod-subpath-test-projected-zch6 no longer exists
STEP: Deleting pod pod-subpath-test-projected-zch6
Aug 14 15:01:42.866: INFO: Deleting pod "pod-subpath-test-projected-zch6" in namespace "subpath-6866"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:01:42.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6866" for this suite.
• [SLOW TEST:25.161 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":107,"skipped":1913,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:01:42.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-ab61c763-e889-4673-8c26-df3d49e071b4 in namespace container-probe-9391
Aug 14 15:01:47.321: INFO: Started pod liveness-ab61c763-e889-4673-8c26-df3d49e071b4 in namespace container-probe-9391
STEP: checking the pod's current state and verifying that restartCount is present
Aug 14 15:01:47.360: INFO: Initial restart count of pod liveness-ab61c763-e889-4673-8c26-df3d49e071b4 is 0
Aug 14 15:02:05.637: INFO: Restart count of pod container-probe-9391/liveness-ab61c763-e889-4673-8c26-df3d49e071b4 is now 1 (18.277287761s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:02:05.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9391" for this suite.
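The restart the probe test observes above (restartCount going from 0 to 1 about 18s after start) follows from the kubelet's liveness bookkeeping: the container is restarted once /healthz fails `failureThreshold` times in a row, and any success resets the streak. A minimal sketch of that accounting; `LivenessTracker` is an illustrative name, not e2e-framework code, and the default of 3 is the Kubernetes `failureThreshold` default:

```python
# Illustrative model of kubelet liveness-probe accounting; not the e2e suite's code.
class LivenessTracker:
    def __init__(self, failure_threshold=3):  # Kubernetes default failureThreshold
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0
        self.restart_count = 0

    def record(self, probe_ok: bool) -> None:
        """Feed one probe result; restart the container after a full failure streak."""
        if probe_ok:
            self.consecutive_failures = 0  # any success resets the streak
            return
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.failure_threshold:
            self.restart_count += 1        # kubelet kills and restarts the container
            self.consecutive_failures = 0
```

With the default `periodSeconds` of 10 this is why the observed restart lands well under the test's timeout rather than immediately after the first failed probe.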
• [SLOW TEST:22.804 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1921,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:02:05.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:02:23.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9992" for this suite.
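The quota lifecycle the steps above walk through (status calculated, usage captured when the Secret is created, usage released when it is deleted) is simple used-versus-hard accounting per resource name. A toy sketch of that bookkeeping; `NamespaceQuota` is an illustrative name, not a client-go type:

```python
# Toy model of ResourceQuota used/hard accounting for a counted resource
# such as "secrets"; not the apiserver's implementation.
class NamespaceQuota:
    def __init__(self, hard):
        self.hard = dict(hard)                  # e.g. {"secrets": 5}
        self.used = {name: 0 for name in hard}  # status.used starts at zero

    def admit(self, resource):
        """Admission check: reject a creation that would exceed the hard limit."""
        if self.used[resource] + 1 > self.hard[resource]:
            raise PermissionError(f"exceeded quota for {resource}")
        self.used[resource] += 1                # usage captured on creation

    def release(self, resource):
        self.used[resource] -= 1                # usage released on deletion
```

The test's "Ensuring resource quota status ..." steps correspond to polling `status.used` until it reflects exactly these increments and decrements.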
• [SLOW TEST:17.355 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":109,"skipped":1922,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:02:23.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 15:02:31.346: INFO: Waiting up to 5m0s for pod "client-envvars-4a14d15e-a34b-40dc-bfec-45e2ff5716ee" in namespace "pods-5004" to be "Succeeded or Failed"
Aug 14 15:02:31.370: INFO: Pod "client-envvars-4a14d15e-a34b-40dc-bfec-45e2ff5716ee": Phase="Pending", Reason="", readiness=false. Elapsed: 23.948333ms
Aug 14 15:02:33.375: INFO: Pod "client-envvars-4a14d15e-a34b-40dc-bfec-45e2ff5716ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029150533s
Aug 14 15:02:35.392: INFO: Pod "client-envvars-4a14d15e-a34b-40dc-bfec-45e2ff5716ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046820243s
Aug 14 15:02:37.399: INFO: Pod "client-envvars-4a14d15e-a34b-40dc-bfec-45e2ff5716ee": Phase="Running", Reason="", readiness=true. Elapsed: 6.052857832s
Aug 14 15:02:39.404: INFO: Pod "client-envvars-4a14d15e-a34b-40dc-bfec-45e2ff5716ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057940595s
STEP: Saw pod success
Aug 14 15:02:39.404: INFO: Pod "client-envvars-4a14d15e-a34b-40dc-bfec-45e2ff5716ee" satisfied condition "Succeeded or Failed"
Aug 14 15:02:39.453: INFO: Trying to get logs from node kali-worker pod client-envvars-4a14d15e-a34b-40dc-bfec-45e2ff5716ee container env3cont:
STEP: delete the pod
Aug 14 15:02:39.683: INFO: Waiting for pod client-envvars-4a14d15e-a34b-40dc-bfec-45e2ff5716ee to disappear
Aug 14 15:02:39.718: INFO: Pod client-envvars-4a14d15e-a34b-40dc-bfec-45e2ff5716ee no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:02:39.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5004" for this suite.
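The env3cont container above succeeds because the kubelet injects environment variables for every service visible when the pod starts: the service name is upper-cased, dashes become underscores, and `_SERVICE_HOST`/`_SERVICE_PORT` carry the ClusterIP and port. A sketch of that naming rule (the helper name is illustrative, and this covers only the `_SERVICE_*` pair, not the additional docker-link-style variables):

```python
def service_env_vars(name: str, cluster_ip: str, port: int) -> dict:
    """Derive kubelet-style service environment variables.

    Upper-case the service name, replace '-' with '_', and expose the
    ClusterIP and port under the _SERVICE_HOST / _SERVICE_PORT suffixes.
    """
    prefix = name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
    }
```

This is also why the test creates the service before the client pod: variables are resolved at container start and are not updated afterwards.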
• [SLOW TEST:16.827 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1939,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:02:39.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 14 15:02:44.106: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 14 15:02:46.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014164, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014164, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014164, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014164, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 15:02:48.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014164, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014164, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014164, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014164, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 14 15:02:51.168: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 14 15:02:55.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config attach --namespace=webhook-1383 to-be-attached-pod -i -c=container1'
Aug 14 15:03:06.872: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:03:07.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1383" for this suite.
STEP: Destroying namespace "webhook-1383-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:29.830 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":111,"skipped":1955,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:03:09.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a
default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-6812/secret-test-c78ee2ba-90f9-4a4f-9457-846d8858ea5c
STEP: Creating a pod to test consume secrets
Aug 14 15:03:10.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13" in namespace "secrets-6812" to be "Succeeded or Failed"
Aug 14 15:03:11.028: INFO: Pod "pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13": Phase="Pending", Reason="", readiness=false. Elapsed: 417.818474ms
Aug 14 15:03:13.083: INFO: Pod "pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472790381s
Aug 14 15:03:15.106: INFO: Pod "pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495562425s
Aug 14 15:03:17.238: INFO: Pod "pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.627798975s
Aug 14 15:03:19.460: INFO: Pod "pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13": Phase="Pending", Reason="", readiness=false. Elapsed: 8.849458057s
Aug 14 15:03:21.466: INFO: Pod "pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.855791959s
STEP: Saw pod success
Aug 14 15:03:21.466: INFO: Pod "pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13" satisfied condition "Succeeded or Failed"
Aug 14 15:03:21.470: INFO: Trying to get logs from node kali-worker pod pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13 container env-test:
STEP: delete the pod
Aug 14 15:03:22.257: INFO: Waiting for pod pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13 to disappear
Aug 14 15:03:22.278: INFO: Pod pod-configmaps-0a3288f8-6ec4-496e-9af0-3f4728663a13 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:03:22.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6812" for this suite.
• [SLOW TEST:12.586 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1961,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:03:22.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 14 15:03:29.375: INFO: Successfully updated pod "annotationupdate2f4a06d9-cd05-4bf1-ab56-712d828be6c5"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:03:31.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1079" for this suite.
• [SLOW TEST:9.185 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1980,"failed":0}
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:03:31.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-48a18000-08cd-439d-8654-837e0f20f12a
STEP: Creating configMap with name cm-test-opt-upd-736286ef-1be4-429d-af2f-6f2992c2d36d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-48a18000-08cd-439d-8654-837e0f20f12a
STEP: Updating configmap cm-test-opt-upd-736286ef-1be4-429d-af2f-6f2992c2d36d
STEP: Creating configMap with name cm-test-opt-create-c0121d1f-ea97-4ef6-bf7e-ccfb55510e3a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:05:07.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6668" for this suite.
• [SLOW TEST:96.036 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1980,"failed":0}
SS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:05:07.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in
namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 14 15:05:14.347: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c05c4365-50ad-4c3f-832f-f802ab11d295"
Aug 14 15:05:14.347: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c05c4365-50ad-4c3f-832f-f802ab11d295" in namespace "pods-1855" to be "terminated due to deadline exceeded"
Aug 14 15:05:14.529: INFO: Pod "pod-update-activedeadlineseconds-c05c4365-50ad-4c3f-832f-f802ab11d295": Phase="Running", Reason="", readiness=true. Elapsed: 181.227394ms
Aug 14 15:05:16.537: INFO: Pod "pod-update-activedeadlineseconds-c05c4365-50ad-4c3f-832f-f802ab11d295": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.188943993s
Aug 14 15:05:16.537: INFO: Pod "pod-update-activedeadlineseconds-c05c4365-50ad-4c3f-832f-f802ab11d295" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:05:16.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1855" for this suite.
• [SLOW TEST:9.039 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":1982,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:05:16.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 14 15:05:23.430: INFO: Successfully updated pod "labelsupdate640256ea-526f-4332-827f-23fe374f867c"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:05:25.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9822" for this suite.
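The phase flip recorded above (Running, then Failed with Reason="DeadlineExceeded" about two seconds after the update) is exactly what `spec.activeDeadlineSeconds` promises: once the pod has been active longer than the deadline, the kubelet fails it. A sketch of that check (the function name is illustrative, not kubelet code):

```python
from datetime import datetime, timedelta

def deadline_exceeded(start_time: datetime, active_deadline_seconds: int,
                      now: datetime) -> bool:
    """True once the pod has been active past spec.activeDeadlineSeconds,
    at which point the kubelet moves it to Phase=Failed, Reason=DeadlineExceeded."""
    return now - start_time >= timedelta(seconds=active_deadline_seconds)
```

Note the deadline counts from the pod's start time, not from when the field was patched, which is why the test can shrink the deadline on an already-running pod and see it fail almost immediately.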
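The labelsupdate pod above passes because a downwardAPI volume re-renders `metadata.labels` after the update; inside the volume, labels are presented one `key="value"` pair per line. A sketch of that serialization (hedged: this mirrors the documented file format, with sorted key order assumed for determinism, and is not the kubelet's source):

```python
def render_labels_file(labels: dict) -> str:
    """Serialize pod labels the way a downwardAPI volume file presents them:
    one key="value" pair per line (sorted here for a stable output)."""
    return "\n".join(f'{key}="{value}"' for key, value in sorted(labels.items()))
```

The test's container simply tails this file, so the updated label value appears without a restart once the kubelet's sync loop rewrites the volume.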
• [SLOW TEST:8.916 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":2034,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:05:25.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 14 15:05:32.135: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 14 15:05:34.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014332, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014332, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014332, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014331, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 15:05:36.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014332, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014332, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014332, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014331, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 15:05:38.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014332, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014332, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014332, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014331, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 14 15:05:42.196: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 15:05:42.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-430-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:05:43.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2494" for this suite.
STEP: Destroying namespace "webhook-2494-markers" for this suite.
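A mutating webhook like the one registered above answers each AdmissionReview with an allowed verdict and, optionally, a base64-encoded JSONPatch that the API server applies to the object before persisting it. A sketch of the AdmissionReview v1 response shape (the handler name is illustrative; the mutation field below is a made-up example, not the one this e2e webhook injects):

```python
import base64
import json

def mutate_response(uid: str, patch_ops: list) -> dict:
    """Build an AdmissionReview v1 response carrying a JSONPatch mutation."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,                    # must echo the request's uid
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch_ops).encode()).decode(),
        },
    }
```

The "with pruning" part of the test then checks that fields the CRD schema does not declare are pruned from the mutated custom resource before it is stored.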
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.302 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":117,"skipped":2035,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:05:43.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-3d57c564-ddb5-4898-b154-119e978bb7c9 STEP: Creating a pod to test consume configMaps Aug 14 15:05:44.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-2b82b7fb-5ff3-4cbc-bc01-8efb3b4f8399" in namespace "configmap-5163" to be "Succeeded or Failed" Aug 14 15:05:44.754: INFO: Pod "pod-configmaps-2b82b7fb-5ff3-4cbc-bc01-8efb3b4f8399": Phase="Pending", Reason="", readiness=false. 
Elapsed: 199.397366ms Aug 14 15:05:46.821: INFO: Pod "pod-configmaps-2b82b7fb-5ff3-4cbc-bc01-8efb3b4f8399": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26702456s Aug 14 15:05:48.828: INFO: Pod "pod-configmaps-2b82b7fb-5ff3-4cbc-bc01-8efb3b4f8399": Phase="Running", Reason="", readiness=true. Elapsed: 4.273456108s Aug 14 15:05:50.833: INFO: Pod "pod-configmaps-2b82b7fb-5ff3-4cbc-bc01-8efb3b4f8399": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.278721858s STEP: Saw pod success Aug 14 15:05:50.833: INFO: Pod "pod-configmaps-2b82b7fb-5ff3-4cbc-bc01-8efb3b4f8399" satisfied condition "Succeeded or Failed" Aug 14 15:05:50.838: INFO: Trying to get logs from node kali-worker pod pod-configmaps-2b82b7fb-5ff3-4cbc-bc01-8efb3b4f8399 container configmap-volume-test: STEP: delete the pod Aug 14 15:05:50.858: INFO: Waiting for pod pod-configmaps-2b82b7fb-5ff3-4cbc-bc01-8efb3b4f8399 to disappear Aug 14 15:05:50.862: INFO: Pod pod-configmaps-2b82b7fb-5ff3-4cbc-bc01-8efb3b4f8399 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:05:50.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5163" for this suite. 
• [SLOW TEST:7.093 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":2039,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:05:50.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 14 15:05:51.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87d25c77-1b10-4897-97df-d50e1e51dd63" in namespace "downward-api-1198" to be "Succeeded or Failed"
Aug 14 15:05:51.117: INFO: Pod "downwardapi-volume-87d25c77-1b10-4897-97df-d50e1e51dd63": Phase="Pending", Reason="", readiness=false. Elapsed: 30.716086ms
Aug 14 15:05:53.187: INFO: Pod "downwardapi-volume-87d25c77-1b10-4897-97df-d50e1e51dd63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100461291s
Aug 14 15:05:55.205: INFO: Pod "downwardapi-volume-87d25c77-1b10-4897-97df-d50e1e51dd63": Phase="Running", Reason="", readiness=true. Elapsed: 4.11788381s
Aug 14 15:05:57.211: INFO: Pod "downwardapi-volume-87d25c77-1b10-4897-97df-d50e1e51dd63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124330156s
STEP: Saw pod success
Aug 14 15:05:57.211: INFO: Pod "downwardapi-volume-87d25c77-1b10-4897-97df-d50e1e51dd63" satisfied condition "Succeeded or Failed"
Aug 14 15:05:57.215: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-87d25c77-1b10-4897-97df-d50e1e51dd63 container client-container:
STEP: delete the pod
Aug 14 15:05:57.295: INFO: Waiting for pod downwardapi-volume-87d25c77-1b10-4897-97df-d50e1e51dd63 to disappear
Aug 14 15:05:57.316: INFO: Pod downwardapi-volume-87d25c77-1b10-4897-97df-d50e1e51dd63 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:05:57.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1198" for this suite.
• [SLOW TEST:6.454 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2078,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:05:57.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-8f6q
STEP: Creating a pod to test atomic-volume-subpath
Aug 14 15:05:57.620: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8f6q" in namespace "subpath-1235" to be "Succeeded or Failed"
Aug 14 15:05:57.669: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Pending", Reason="", readiness=false. Elapsed: 49.137404ms
Aug 14 15:05:59.707: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087295533s
Aug 14 15:06:01.715: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094717331s
Aug 14 15:06:03.723: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 6.102578387s
Aug 14 15:06:05.729: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 8.108753045s
Aug 14 15:06:07.983: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 10.363355675s
Aug 14 15:06:10.036: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 12.416504721s
Aug 14 15:06:12.205: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 14.585008465s
Aug 14 15:06:14.222: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 16.601899774s
Aug 14 15:06:16.294: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 18.674043869s
Aug 14 15:06:18.300: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 20.679633381s
Aug 14 15:06:20.305: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 22.684960396s
Aug 14 15:06:22.338: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 24.718359252s
Aug 14 15:06:24.366: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Running", Reason="", readiness=true. Elapsed: 26.745715609s
Aug 14 15:06:26.384: INFO: Pod "pod-subpath-test-configmap-8f6q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.764040013s
STEP: Saw pod success
Aug 14 15:06:26.384: INFO: Pod "pod-subpath-test-configmap-8f6q" satisfied condition "Succeeded or Failed"
Aug 14 15:06:26.434: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-8f6q container test-container-subpath-configmap-8f6q:
STEP: delete the pod
Aug 14 15:06:26.966: INFO: Waiting for pod pod-subpath-test-configmap-8f6q to disappear
Aug 14 15:06:26.970: INFO: Pod pod-subpath-test-configmap-8f6q no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8f6q
Aug 14 15:06:26.970: INFO: Deleting pod "pod-subpath-test-configmap-8f6q" in namespace "subpath-1235"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:06:26.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1235" for this suite.
• [SLOW TEST:29.656 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":120,"skipped":2104,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:06:26.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 14 15:06:27.266: INFO: Waiting up to 5m0s for pod "pod-a8256114-8649-4046-b6b8-3a54a8025b5c" in namespace "emptydir-4132" to be "Succeeded or Failed"
Aug 14 15:06:27.314: INFO: Pod "pod-a8256114-8649-4046-b6b8-3a54a8025b5c": Phase="Pending", Reason="", readiness=false. Elapsed: 47.898497ms
Aug 14 15:06:29.322: INFO: Pod "pod-a8256114-8649-4046-b6b8-3a54a8025b5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055826164s
Aug 14 15:06:31.331: INFO: Pod "pod-a8256114-8649-4046-b6b8-3a54a8025b5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064803722s
Aug 14 15:06:33.338: INFO: Pod "pod-a8256114-8649-4046-b6b8-3a54a8025b5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071411927s
STEP: Saw pod success
Aug 14 15:06:33.338: INFO: Pod "pod-a8256114-8649-4046-b6b8-3a54a8025b5c" satisfied condition "Succeeded or Failed"
Aug 14 15:06:33.343: INFO: Trying to get logs from node kali-worker pod pod-a8256114-8649-4046-b6b8-3a54a8025b5c container test-container:
STEP: delete the pod
Aug 14 15:06:33.535: INFO: Waiting for pod pod-a8256114-8649-4046-b6b8-3a54a8025b5c to disappear
Aug 14 15:06:33.569: INFO: Pod pod-a8256114-8649-4046-b6b8-3a54a8025b5c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:06:33.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4132" for this suite.
• [SLOW TEST:6.590 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2132,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:06:33.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Aug 14 15:06:33.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config cluster-info'
Aug 14 15:06:34.991: INFO: stderr: ""
Aug 14 15:06:34.991: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:06:34.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4122" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":122,"skipped":2169,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:06:35.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 14 15:06:35.116: INFO: Waiting up to 5m0s for pod "pod-c5e99255-6416-4adc-9dfd-aaf9be5f6ef0" in namespace "emptydir-6975" to be "Succeeded or Failed"
Aug 14 15:06:35.127: INFO: Pod "pod-c5e99255-6416-4adc-9dfd-aaf9be5f6ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.773572ms
Aug 14 15:06:37.390: INFO: Pod "pod-c5e99255-6416-4adc-9dfd-aaf9be5f6ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.274240512s
Aug 14 15:06:39.402: INFO: Pod "pod-c5e99255-6416-4adc-9dfd-aaf9be5f6ef0": Phase="Running", Reason="", readiness=true. Elapsed: 4.286208642s
Aug 14 15:06:41.408: INFO: Pod "pod-c5e99255-6416-4adc-9dfd-aaf9be5f6ef0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.292027424s
STEP: Saw pod success
Aug 14 15:06:41.408: INFO: Pod "pod-c5e99255-6416-4adc-9dfd-aaf9be5f6ef0" satisfied condition "Succeeded or Failed"
Aug 14 15:06:41.413: INFO: Trying to get logs from node kali-worker pod pod-c5e99255-6416-4adc-9dfd-aaf9be5f6ef0 container test-container:
STEP: delete the pod
Aug 14 15:06:41.449: INFO: Waiting for pod pod-c5e99255-6416-4adc-9dfd-aaf9be5f6ef0 to disappear
Aug 14 15:06:41.484: INFO: Pod pod-c5e99255-6416-4adc-9dfd-aaf9be5f6ef0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:06:41.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6975" for this suite.
• [SLOW TEST:6.493 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2190,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:06:41.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:06:59.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9211" for this suite.
• [SLOW TEST:18.163 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":124,"skipped":2200,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:06:59.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 14 15:06:59.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-851'
Aug 14 15:07:01.482: INFO: stderr: ""
Aug 14 15:07:01.483: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 14 15:07:01.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-851'
Aug 14 15:07:02.785: INFO: stderr: ""
Aug 14 15:07:02.785: INFO: stdout: "update-demo-nautilus-7qvfb update-demo-nautilus-rtpqn "
Aug 14 15:07:02.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qvfb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-851'
Aug 14 15:07:04.072: INFO: stderr: ""
Aug 14 15:07:04.072: INFO: stdout: ""
Aug 14 15:07:04.073: INFO: update-demo-nautilus-7qvfb is created but not running
Aug 14 15:07:09.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-851'
Aug 14 15:07:10.395: INFO: stderr: ""
Aug 14 15:07:10.395: INFO: stdout: "update-demo-nautilus-7qvfb update-demo-nautilus-rtpqn "
Aug 14 15:07:10.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qvfb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-851'
Aug 14 15:07:11.900: INFO: stderr: ""
Aug 14 15:07:11.900: INFO: stdout: "true"
Aug 14 15:07:11.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qvfb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-851'
Aug 14 15:07:13.139: INFO: stderr: ""
Aug 14 15:07:13.139: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 15:07:13.140: INFO: validating pod update-demo-nautilus-7qvfb
Aug 14 15:07:13.494: INFO: got data: {
  "image": "nautilus.jpg"
}
Aug 14 15:07:13.495: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 15:07:13.496: INFO: update-demo-nautilus-7qvfb is verified up and running
Aug 14 15:07:13.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rtpqn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-851'
Aug 14 15:07:14.747: INFO: stderr: ""
Aug 14 15:07:14.748: INFO: stdout: "true"
Aug 14 15:07:14.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rtpqn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-851'
Aug 14 15:07:16.472: INFO: stderr: ""
Aug 14 15:07:16.472: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 14 15:07:16.472: INFO: validating pod update-demo-nautilus-rtpqn
Aug 14 15:07:16.901: INFO: got data: {
  "image": "nautilus.jpg"
}
Aug 14 15:07:16.901: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 14 15:07:16.901: INFO: update-demo-nautilus-rtpqn is verified up and running
STEP: using delete to clean up resources
Aug 14 15:07:16.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-851'
Aug 14 15:07:18.134: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 14 15:07:18.134: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 14 15:07:18.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-851'
Aug 14 15:07:19.925: INFO: stderr: "No resources found in kubectl-851 namespace.\n"
Aug 14 15:07:19.925: INFO: stdout: ""
Aug 14 15:07:19.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-851 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 14 15:07:21.203: INFO: stderr: ""
Aug 14 15:07:21.203: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:07:21.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-851" for this suite.
• [SLOW TEST:21.542 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":125,"skipped":2203,"failed":0}
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:07:21.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0814 15:07:32.312603      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 14 15:07:32.312: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:07:32.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4739" for this suite.
• [SLOW TEST:11.106 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":126,"skipped":2203,"failed":0}
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:07:32.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-7548
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 14 15:07:32.601: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 14 15:07:32.840: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 15:07:35.013: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 15:07:37.100: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 15:07:39.541: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 15:07:40.985: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 15:07:42.846: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 15:07:44.846: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 15:07:46.847: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 15:07:48.847: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 15:07:50.847: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 15:07:52.848: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 15:07:54.846: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 15:07:56.846: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 14 15:07:56.856: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 14 15:08:00.938: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.70:8080/dial?request=hostname&protocol=udp&host=10.244.2.69&port=8081&tries=1'] Namespace:pod-network-test-7548 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 15:08:00.939: INFO: >>> kubeConfig: /root/.kube/config
I0814 15:08:01.006537      10 log.go:172] (0x40018a0370) (0x40015cf720) Create stream
I0814 15:08:01.006777      10 log.go:172] (0x40018a0370) (0x40015cf720) Stream added, broadcasting: 1
I0814 15:08:01.011498      10 log.go:172] (0x40018a0370) Reply frame received for 1
I0814 15:08:01.011665      10 log.go:172] (0x40018a0370) (0x4000fb9860) Create stream
I0814 15:08:01.011749      10 log.go:172] (0x40018a0370) (0x4000fb9860) Stream added, broadcasting: 3
I0814 15:08:01.013553      10 log.go:172] (0x40018a0370) Reply frame received for 3
I0814 15:08:01.013767      10 log.go:172] (0x40018a0370) (0x40015cf7c0) Create stream
I0814 15:08:01.013895      10 log.go:172] (0x40018a0370) (0x40015cf7c0) Stream added, broadcasting: 5
I0814 15:08:01.015790      10 log.go:172] (0x40018a0370) Reply frame received for 5
I0814 15:08:01.074324      10 log.go:172] (0x40018a0370) Data frame received for 3
I0814 15:08:01.074541      10 log.go:172] (0x40018a0370) Data frame received for 5
I0814 15:08:01.074655      10 log.go:172] (0x4000fb9860) (3) Data frame handling
I0814 15:08:01.074819      10 log.go:172] (0x40015cf7c0) (5) Data frame handling
I0814 15:08:01.075010      10 log.go:172] (0x4000fb9860) (3) Data frame sent
I0814 15:08:01.075139      10 log.go:172] (0x40018a0370) Data frame received for 3
I0814 15:08:01.075230      10 log.go:172] (0x4000fb9860) (3) Data frame handling
I0814 15:08:01.076666      10 log.go:172] (0x40018a0370) Data frame received for 1
I0814 15:08:01.076819      10 log.go:172] (0x40015cf720) (1) Data frame handling
I0814 15:08:01.076896      10 log.go:172] (0x40015cf720) (1) Data frame sent
I0814 15:08:01.076973      10 log.go:172] (0x40018a0370) (0x40015cf720) Stream removed, broadcasting: 1
I0814 15:08:01.077066      10 log.go:172] (0x40018a0370) Go away received
I0814 15:08:01.077484      10 log.go:172] (0x40018a0370) (0x40015cf720) Stream removed, broadcasting: 1
I0814 15:08:01.077612      10 log.go:172] (0x40018a0370) (0x4000fb9860) Stream removed, broadcasting: 3
I0814 15:08:01.077730      10 log.go:172] (0x40018a0370) (0x40015cf7c0) Stream removed, broadcasting: 5
Aug 14 15:08:01.077: INFO: Waiting for responses: map[]
Aug 14 15:08:01.083: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.70:8080/dial?request=hostname&protocol=udp&host=10.244.1.75&port=8081&tries=1'] Namespace:pod-network-test-7548 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 15:08:01.083: INFO: >>> kubeConfig: /root/.kube/config
I0814 15:08:01.193318      10 log.go:172] (0x40029d4790) (0x40014712c0) Create stream
I0814 15:08:01.193489      10 log.go:172] (0x40029d4790) (0x40014712c0) Stream added, broadcasting: 1
I0814
15:08:01.197506 10 log.go:172] (0x40029d4790) Reply frame received for 1 I0814 15:08:01.197664 10 log.go:172] (0x40029d4790) (0x40015cf860) Create stream I0814 15:08:01.197762 10 log.go:172] (0x40029d4790) (0x40015cf860) Stream added, broadcasting: 3 I0814 15:08:01.199086 10 log.go:172] (0x40029d4790) Reply frame received for 3 I0814 15:08:01.199235 10 log.go:172] (0x40029d4790) (0x4001471400) Create stream I0814 15:08:01.199313 10 log.go:172] (0x40029d4790) (0x4001471400) Stream added, broadcasting: 5 I0814 15:08:01.200587 10 log.go:172] (0x40029d4790) Reply frame received for 5 I0814 15:08:01.271125 10 log.go:172] (0x40029d4790) Data frame received for 3 I0814 15:08:01.271334 10 log.go:172] (0x40015cf860) (3) Data frame handling I0814 15:08:01.271541 10 log.go:172] (0x40015cf860) (3) Data frame sent I0814 15:08:01.271700 10 log.go:172] (0x40029d4790) Data frame received for 3 I0814 15:08:01.271846 10 log.go:172] (0x40015cf860) (3) Data frame handling I0814 15:08:01.271984 10 log.go:172] (0x40029d4790) Data frame received for 5 I0814 15:08:01.272121 10 log.go:172] (0x4001471400) (5) Data frame handling I0814 15:08:01.273349 10 log.go:172] (0x40029d4790) Data frame received for 1 I0814 15:08:01.273445 10 log.go:172] (0x40014712c0) (1) Data frame handling I0814 15:08:01.273540 10 log.go:172] (0x40014712c0) (1) Data frame sent I0814 15:08:01.273646 10 log.go:172] (0x40029d4790) (0x40014712c0) Stream removed, broadcasting: 1 I0814 15:08:01.273773 10 log.go:172] (0x40029d4790) Go away received I0814 15:08:01.273960 10 log.go:172] (0x40029d4790) (0x40014712c0) Stream removed, broadcasting: 1 I0814 15:08:01.274065 10 log.go:172] (0x40029d4790) (0x40015cf860) Stream removed, broadcasting: 3 I0814 15:08:01.274176 10 log.go:172] (0x40029d4790) (0x4001471400) Stream removed, broadcasting: 5 Aug 14 15:08:01.274: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:08:01.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7548" for this suite. • [SLOW TEST:29.018 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2206,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:08:01.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 14 15:08:05.522: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Aug 14 15:08:08.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:08:10.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Aug 14 15:08:12.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014485, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 15:08:15.499: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:08:25.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7110" for this suite. STEP: Destroying namespace "webhook-7110-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.529 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":128,"skipped":2219,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:08:25.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-391703be-218d-43ac-a85a-9963be09d587 in namespace container-probe-1507 Aug 14 15:08:30.028: INFO: Started pod busybox-391703be-218d-43ac-a85a-9963be09d587 in namespace container-probe-1507 STEP: checking the pod's current state and verifying that restartCount is present Aug 14 15:08:30.034: INFO: Initial restart count of pod busybox-391703be-218d-43ac-a85a-9963be09d587 is 0 Aug 14 15:09:18.875: INFO: Restart count of pod container-probe-1507/busybox-391703be-218d-43ac-a85a-9963be09d587 is now 1 (48.841271471s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:09:18.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1507" for this suite. • [SLOW TEST:53.165 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2233,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client 
Aug 14 15:09:19.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 14 15:09:19.571: INFO: Waiting up to 5m0s for pod "pod-e3519ddc-1c26-435c-b61e-25e359a61dd3" in namespace "emptydir-6448" to be "Succeeded or Failed" Aug 14 15:09:19.588: INFO: Pod "pod-e3519ddc-1c26-435c-b61e-25e359a61dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.436114ms Aug 14 15:09:21.867: INFO: Pod "pod-e3519ddc-1c26-435c-b61e-25e359a61dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295808566s Aug 14 15:09:23.874: INFO: Pod "pod-e3519ddc-1c26-435c-b61e-25e359a61dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303370128s Aug 14 15:09:26.014: INFO: Pod "pod-e3519ddc-1c26-435c-b61e-25e359a61dd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.443643493s STEP: Saw pod success Aug 14 15:09:26.015: INFO: Pod "pod-e3519ddc-1c26-435c-b61e-25e359a61dd3" satisfied condition "Succeeded or Failed" Aug 14 15:09:26.020: INFO: Trying to get logs from node kali-worker pod pod-e3519ddc-1c26-435c-b61e-25e359a61dd3 container test-container: STEP: delete the pod Aug 14 15:09:26.160: INFO: Waiting for pod pod-e3519ddc-1c26-435c-b61e-25e359a61dd3 to disappear Aug 14 15:09:26.185: INFO: Pod pod-e3519ddc-1c26-435c-b61e-25e359a61dd3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:09:26.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6448" for this suite. 
• [SLOW TEST:7.155 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2250,"failed":0} [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:09:26.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:09:26.280: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 14 15:09:26.310: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 14 15:09:31.344: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 14 15:09:31.345: INFO: Creating deployment "test-rolling-update-deployment" Aug 14 15:09:31.359: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 14 15:09:31.557: INFO: new replicaset for 
deployment "test-rolling-update-deployment" is yet to be created Aug 14 15:09:33.573: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 14 15:09:33.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014571, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014571, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014571, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014571, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:09:35.835: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 14 15:09:35.919: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9294 /apis/apps/v1/namespaces/deployment-9294/deployments/test-rolling-update-deployment 59994324-dfcc-4eba-9e38-bb142c34b701 9552045 1 2020-08-14 15:09:31 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-08-14 15:09:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 
101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 
100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-14 15:09:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 
123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40008002d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-14 15:09:31 +0000 UTC,LastTransitionTime:2020-08-14 15:09:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-08-14 15:09:35 +0000 UTC,LastTransitionTime:2020-08-14 
15:09:31 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 14 15:09:35.932: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7 deployment-9294 /apis/apps/v1/namespaces/deployment-9294/replicasets/test-rolling-update-deployment-59d5cb45c7 9744efbd-82d4-47b3-8b4b-40206e9c5c27 9552032 1 2020-08-14 15:09:31 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 59994324-dfcc-4eba-9e38-bb142c34b701 0x40008016f7 0x40008016f8}] [] [{kube-controller-manager Update apps/v1 2020-08-14 15:09:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 57 57 57 52 51 50 52 45 100 102 99 99 45 52 101 98 97 45 57 101 51 56 45 98 98 49 52 50 99 51 52 98 55 48 49 92 34 125 34 58 123 34 46 34 58 
123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 
109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40008017b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 14 15:09:35.932: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 14 15:09:35.933: 
INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9294 /apis/apps/v1/namespaces/deployment-9294/replicasets/test-rolling-update-controller d2bd37f1-b45c-4b20-8a04-00c3a132901c 9552044 2 2020-08-14 15:09:26 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 59994324-dfcc-4eba-9e38-bb142c34b701 0x40008015df 0x40008015f0}] [] [{e2e.test Update apps/v1 2020-08-14 15:09:26 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 
58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-14 15:09:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 57 57 57 52 51 50 52 45 100 102 99 99 45 52 101 98 97 45 57 101 51 56 45 98 98 49 52 50 99 51 52 98 55 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 
97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4000801688 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 14 15:09:36.009: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-2npmf" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-2npmf test-rolling-update-deployment-59d5cb45c7- deployment-9294 /api/v1/namespaces/deployment-9294/pods/test-rolling-update-deployment-59d5cb45c7-2npmf 84fbfb95-f8e8-4dac-a649-866d5bbe331e 9552031 0 2020-08-14 15:09:31 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 9744efbd-82d4-47b3-8b4b-40206e9c5c27 0x40020e52e7 0x40020e52e8}] [] [{kube-controller-manager Update v1 2020-08-14 15:09:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 
101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 55 52 52 101 102 98 100 45 56 50 100 52 45 52 55 98 51 45 56 98 52 98 45 52 48 50 48 54 101 57 99 53 99 50 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 
114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-14 15:09:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-44cn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-44cn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-44cn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImageP
ullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:09:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:09:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:09:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:09:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.75,StartTime:2020-08-14 15:09:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 15:09:34 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://9dbe86332a4517c243c9c7e54e0741bb7a6bd95afce2fbae70895c3e022ec958,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.75,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:09:36.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9294" for this suite. • [SLOW TEST:9.839 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":131,"skipped":2250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:09:36.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 14 15:09:38.316: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 14 15:09:40.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:09:42.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, 
loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:09:44.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:09:46.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014578, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 15:09:49.685: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:09:49.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6286-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:09:52.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6466" for this suite. STEP: Destroying namespace "webhook-6466-markers" for this suite. 
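The managedFields diffs dumped earlier in this run (the long `FieldsV1{Raw:*[123 34 102 ...]}` blocks) are decimal ASCII byte values of the underlying JSON patch. A minimal sketch for decoding them offline; the helper name is ours, and the sample input is just the short `{"f:metadata":{}}` pattern, not a full dump from this log:

```python
# Decode a FieldsV1 Raw dump ("123 34 102 58 ...") back into readable JSON.
# Each decimal number is one UTF-8 byte of the managedFields patch.
def decode_fieldsv1(raw: str) -> str:
    return bytes(int(b) for b in raw.split()).decode("utf-8")

print(decode_fieldsv1("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"))
# -> {"f:metadata":{}}
```

Pasting one of the full byte runs from the ReplicaSet dumps above into this helper yields the server-side-apply field ownership JSON in plain text.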
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.068 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":132,"skipped":2275,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:09:53.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 14 15:09:54.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run 
e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1483' Aug 14 15:09:56.039: INFO: stderr: "" Aug 14 15:09:56.040: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Aug 14 15:10:01.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1483 -o json' Aug 14 15:10:02.308: INFO: stderr: "" Aug 14 15:10:02.308: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-14T15:09:55Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-14T15:09:55Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": 
{\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.77\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-14T15:09:59Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1483\",\n \"resourceVersion\": \"9552219\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1483/pods/e2e-test-httpd-pod\",\n \"uid\": \"c7b178fa-c44e-4eaa-b10e-bf28028b2344\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-nkp44\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-nkp44\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-nkp44\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-14T15:09:56Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-14T15:09:59Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-14T15:09:59Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-14T15:09:55Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d21f6100124b5105e69aaaa71b0af44c827f39d725a53872d130193260dc0afa\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-14T15:09:59Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.77\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.77\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-14T15:09:56Z\"\n }\n}\n" STEP: replace the image in the pod Aug 14 15:10:02.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1483' Aug 14 15:10:04.035: INFO: stderr: "" Aug 14 15:10:04.035: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Aug 14 15:10:04.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod 
--namespace=kubectl-1483' Aug 14 15:10:08.122: INFO: stderr: "" Aug 14 15:10:08.122: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:10:08.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1483" for this suite. • [SLOW TEST:15.320 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":133,"skipped":2283,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:10:08.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-015393b5-1d71-454a-9245-787fd145a32c STEP: Creating a pod to test consume 
secrets Aug 14 15:10:08.860: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ff4958a-de4f-4daa-aa11-9c902fe8d1e0" in namespace "projected-5599" to be "Succeeded or Failed" Aug 14 15:10:08.937: INFO: Pod "pod-projected-secrets-6ff4958a-de4f-4daa-aa11-9c902fe8d1e0": Phase="Pending", Reason="", readiness=false. Elapsed: 76.60191ms Aug 14 15:10:11.165: INFO: Pod "pod-projected-secrets-6ff4958a-de4f-4daa-aa11-9c902fe8d1e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303964686s Aug 14 15:10:13.171: INFO: Pod "pod-projected-secrets-6ff4958a-de4f-4daa-aa11-9c902fe8d1e0": Phase="Running", Reason="", readiness=true. Elapsed: 4.309925303s Aug 14 15:10:15.179: INFO: Pod "pod-projected-secrets-6ff4958a-de4f-4daa-aa11-9c902fe8d1e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.318653589s STEP: Saw pod success Aug 14 15:10:15.180: INFO: Pod "pod-projected-secrets-6ff4958a-de4f-4daa-aa11-9c902fe8d1e0" satisfied condition "Succeeded or Failed" Aug 14 15:10:15.185: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-6ff4958a-de4f-4daa-aa11-9c902fe8d1e0 container secret-volume-test: STEP: delete the pod Aug 14 15:10:15.271: INFO: Waiting for pod pod-projected-secrets-6ff4958a-de4f-4daa-aa11-9c902fe8d1e0 to disappear Aug 14 15:10:15.295: INFO: Pod pod-projected-secrets-6ff4958a-de4f-4daa-aa11-9c902fe8d1e0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:10:15.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5599" for this suite. 
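Each finished spec emits a one-line JSON progress record (the `{"msg":"PASSED ...","total":275,...}` lines between the separators). A small sketch, using a record copied verbatim from this run, for tallying how far the suite has progressed:

```python
import json

# Progress record copied from this run's output.
line = ('{"msg":"PASSED [sig-storage] Projected secret should be consumable '
        'in multiple volumes in a pod [NodeConformance] [Conformance]",'
        '"total":275,"completed":134,"skipped":2319,"failed":0}')
rec = json.loads(line)
remaining = rec["total"] - rec["completed"]
print(f'{rec["completed"]}/{rec["total"]} done, {rec["failed"]} failed, {remaining} to go')
# -> 134/275 done, 0 failed, 141 to go
```

Filtering a whole run's output for lines starting with `{"msg":` and parsing them this way gives a quick pass/fail summary without re-reading the full log.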
• [SLOW TEST:6.877 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:10:15.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 
'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:11:27.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9127" for this suite. • [SLOW TEST:72.037 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:11:27.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-86jl STEP: Creating a pod to test atomic-volume-subpath Aug 14 15:11:28.260: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-86jl" in namespace "subpath-4473" to be "Succeeded or Failed" Aug 14 15:11:28.422: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Pending", Reason="", readiness=false. Elapsed: 161.794129ms Aug 14 15:11:30.430: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169838891s Aug 14 15:11:32.518: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258152147s Aug 14 15:11:34.525: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Running", Reason="", readiness=true. Elapsed: 6.26491336s Aug 14 15:11:36.530: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Running", Reason="", readiness=true. Elapsed: 8.269547169s Aug 14 15:11:38.553: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Running", Reason="", readiness=true. Elapsed: 10.292739467s Aug 14 15:11:40.558: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Running", Reason="", readiness=true. Elapsed: 12.297826367s Aug 14 15:11:42.571: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.310770982s Aug 14 15:11:44.595: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Running", Reason="", readiness=true. Elapsed: 16.335359857s Aug 14 15:11:46.601: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Running", Reason="", readiness=true. Elapsed: 18.341155634s Aug 14 15:11:48.607: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Running", Reason="", readiness=true. Elapsed: 20.346526314s Aug 14 15:11:50.613: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Running", Reason="", readiness=true. Elapsed: 22.352487918s Aug 14 15:11:52.617: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Running", Reason="", readiness=true. Elapsed: 24.357300033s Aug 14 15:11:54.741: INFO: Pod "pod-subpath-test-configmap-86jl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.480548804s STEP: Saw pod success Aug 14 15:11:54.741: INFO: Pod "pod-subpath-test-configmap-86jl" satisfied condition "Succeeded or Failed" Aug 14 15:11:54.745: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-86jl container test-container-subpath-configmap-86jl: STEP: delete the pod Aug 14 15:11:54.788: INFO: Waiting for pod pod-subpath-test-configmap-86jl to disappear Aug 14 15:11:54.799: INFO: Pod pod-subpath-test-configmap-86jl no longer exists STEP: Deleting pod pod-subpath-test-configmap-86jl Aug 14 15:11:54.799: INFO: Deleting pod "pod-subpath-test-configmap-86jl" in namespace "subpath-4473" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:11:54.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4473" for this suite. 
• [SLOW TEST:27.460 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":136,"skipped":2402,"failed":0} SSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:11:54.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Aug 14 15:11:54.903: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
Aug 14 15:11:58.490: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 14 15:12:01.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014718, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014718, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014718, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014718, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:12:03.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014718, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014718, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014718, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733014718, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:12:05.816: INFO: Waited 627.661335ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:12:06.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8478" for this suite. • [SLOW TEST:11.551 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":137,"skipped":2405,"failed":0} SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:12:06.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 14 15:12:07.018: INFO: Waiting up to 5m0s for pod "downward-api-cc5066d3-2e21-4ee2-a9b6-2059b3ccdf82" in namespace "downward-api-2200" to be "Succeeded or Failed" Aug 14 15:12:07.086: INFO: Pod "downward-api-cc5066d3-2e21-4ee2-a9b6-2059b3ccdf82": Phase="Pending", Reason="", readiness=false. Elapsed: 68.25902ms Aug 14 15:12:09.093: INFO: Pod "downward-api-cc5066d3-2e21-4ee2-a9b6-2059b3ccdf82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074820238s Aug 14 15:12:11.100: INFO: Pod "downward-api-cc5066d3-2e21-4ee2-a9b6-2059b3ccdf82": Phase="Running", Reason="", readiness=true. Elapsed: 4.081401859s Aug 14 15:12:13.104: INFO: Pod "downward-api-cc5066d3-2e21-4ee2-a9b6-2059b3ccdf82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086244422s STEP: Saw pod success Aug 14 15:12:13.105: INFO: Pod "downward-api-cc5066d3-2e21-4ee2-a9b6-2059b3ccdf82" satisfied condition "Succeeded or Failed" Aug 14 15:12:13.108: INFO: Trying to get logs from node kali-worker pod downward-api-cc5066d3-2e21-4ee2-a9b6-2059b3ccdf82 container dapi-container: STEP: delete the pod Aug 14 15:12:13.139: INFO: Waiting for pod downward-api-cc5066d3-2e21-4ee2-a9b6-2059b3ccdf82 to disappear Aug 14 15:12:13.159: INFO: Pod downward-api-cc5066d3-2e21-4ee2-a9b6-2059b3ccdf82 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:12:13.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2200" for this suite. 
• [SLOW TEST:6.803 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2407,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:12:13.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-37670663-0a55-4b76-a821-5519cfd73750 in namespace container-probe-1736 Aug 14 15:12:17.310: INFO: Started pod liveness-37670663-0a55-4b76-a821-5519cfd73750 in namespace container-probe-1736 STEP: checking the pod's current state and verifying that restartCount is present Aug 14 15:12:17.314: INFO: Initial restart count of pod liveness-37670663-0a55-4b76-a821-5519cfd73750 is 0 STEP: deleting 
the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:16:19.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1736" for this suite. • [SLOW TEST:246.766 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2432,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:16:19.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Aug 14 15:16:27.360: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8474 PodName:pod-sharedvolume-00aca2f5-38e5-4d7e-92fa-a1b4d98199d6 ContainerName:busybox-main-container Stdin:
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 14 15:16:27.360: INFO: >>> kubeConfig: /root/.kube/config I0814 15:16:27.418469 10 log.go:172] (0x4002432630) (0x4001a121e0) Create stream I0814 15:16:27.418618 10 log.go:172] (0x4002432630) (0x4001a121e0) Stream added, broadcasting: 1 I0814 15:16:27.421767 10 log.go:172] (0x4002432630) Reply frame received for 1 I0814 15:16:27.421927 10 log.go:172] (0x4002432630) (0x4001a12280) Create stream I0814 15:16:27.421991 10 log.go:172] (0x4002432630) (0x4001a12280) Stream added, broadcasting: 3 I0814 15:16:27.423477 10 log.go:172] (0x4002432630) Reply frame received for 3 I0814 15:16:27.423627 10 log.go:172] (0x4002432630) (0x4001168000) Create stream I0814 15:16:27.423693 10 log.go:172] (0x4002432630) (0x4001168000) Stream added, broadcasting: 5 I0814 15:16:27.425347 10 log.go:172] (0x4002432630) Reply frame received for 5 I0814 15:16:27.468720 10 log.go:172] (0x4002432630) Data frame received for 3 I0814 15:16:27.468937 10 log.go:172] (0x4001a12280) (3) Data frame handling I0814 15:16:27.469018 10 log.go:172] (0x4001a12280) (3) Data frame sent I0814 15:16:27.469092 10 log.go:172] (0x4002432630) Data frame received for 3 I0814 15:16:27.469146 10 log.go:172] (0x4001a12280) (3) Data frame handling I0814 15:16:27.469242 10 log.go:172] (0x4002432630) Data frame received for 5 I0814 15:16:27.469358 10 log.go:172] (0x4001168000) (5) Data frame handling I0814 15:16:27.470024 10 log.go:172] (0x4002432630) Data frame received for 1 I0814 15:16:27.470086 10 log.go:172] (0x4001a121e0) (1) Data frame handling I0814 15:16:27.470141 10 log.go:172] (0x4001a121e0) (1) Data frame sent I0814 15:16:27.470229 10 log.go:172] (0x4002432630) (0x4001a121e0) Stream removed, broadcasting: 1 I0814 15:16:27.470319 10 log.go:172] (0x4002432630) Go away received I0814 15:16:27.470547 10 log.go:172] (0x4002432630) (0x4001a121e0) Stream removed, broadcasting: 1 I0814 15:16:27.470622 10 log.go:172] (0x4002432630) (0x4001a12280) Stream 
removed, broadcasting: 3 I0814 15:16:27.470707 10 log.go:172] (0x4002432630) (0x4001168000) Stream removed, broadcasting: 5 Aug 14 15:16:27.470: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:16:27.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8474" for this suite. • [SLOW TEST:7.543 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":140,"skipped":2449,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:16:27.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:16:27.894: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Aug 14 15:16:48.131: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2934 create -f -' Aug 14 15:16:57.535: INFO: stderr: "" Aug 14 15:16:57.536: INFO: stdout: "e2e-test-crd-publish-openapi-7925-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 14 15:16:57.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2934 delete e2e-test-crd-publish-openapi-7925-crds test-foo' Aug 14 15:16:58.789: INFO: stderr: "" Aug 14 15:16:58.789: INFO: stdout: "e2e-test-crd-publish-openapi-7925-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Aug 14 15:16:58.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2934 apply -f -' Aug 14 15:17:00.418: INFO: stderr: "" Aug 14 15:17:00.418: INFO: stdout: "e2e-test-crd-publish-openapi-7925-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 14 15:17:00.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2934 delete e2e-test-crd-publish-openapi-7925-crds test-foo' Aug 14 15:17:01.667: INFO: stderr: "" Aug 14 15:17:01.667: INFO: stdout: "e2e-test-crd-publish-openapi-7925-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Aug 14 15:17:01.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2934 create -f -' Aug 14 15:17:03.262: INFO: rc: 1 Aug 14 15:17:03.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2934 apply -f -' Aug 14 
15:17:05.241: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Aug 14 15:17:05.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2934 create -f -' Aug 14 15:17:07.891: INFO: rc: 1 Aug 14 15:17:07.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2934 apply -f -' Aug 14 15:17:10.145: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Aug 14 15:17:10.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7925-crds' Aug 14 15:17:12.088: INFO: stderr: "" Aug 14 15:17:12.089: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7925-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Aug 14 15:17:12.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7925-crds.metadata' Aug 14 15:17:14.111: INFO: stderr: "" Aug 14 15:17:14.111: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7925-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Aug 14 15:17:14.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7925-crds.spec' Aug 14 15:17:15.696: INFO: stderr: "" Aug 14 15:17:15.696: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7925-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Aug 14 15:17:15.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7925-crds.spec.bars' Aug 14 15:17:17.312: INFO: stderr: "" Aug 14 15:17:17.313: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7925-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Aug 14 15:17:17.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7925-crds.spec.bars2' Aug 14 15:17:19.992: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:17:40.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2934" for this suite. • [SLOW TEST:72.683 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":141,"skipped":2457,"failed":0} [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:17:40.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:17:40.353: INFO: Create a RollingUpdate DaemonSet Aug 14 15:17:40.377: INFO: Check that daemon pods launch on every node of the cluster Aug 14 15:17:40.397: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 15:17:40.412: INFO: Number of nodes with available pods: 0 Aug 14 15:17:40.412: INFO: Node kali-worker is running more than one daemon pod Aug 14 15:17:41.422: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 15:17:41.428: INFO: Number of nodes with available pods: 0 Aug 14 15:17:41.428: INFO: Node kali-worker is running more than one daemon pod Aug 14 15:17:42.721: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 15:17:42.726: INFO: Number of nodes with available pods: 0 Aug 14 15:17:42.726: INFO: Node kali-worker is running more than one daemon pod Aug 14 15:17:43.650: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 15:17:43.717: INFO: Number of nodes with available pods: 0 Aug 14 15:17:43.717: INFO: Node kali-worker is running more than one daemon pod Aug 14 15:17:44.519: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 15:17:44.525: INFO: Number of nodes with 
available pods: 0 Aug 14 15:17:44.525: INFO: Node kali-worker is running more than one daemon pod Aug 14 15:17:45.434: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 15:17:45.439: INFO: Number of nodes with available pods: 1 Aug 14 15:17:45.439: INFO: Node kali-worker is running more than one daemon pod Aug 14 15:17:46.819: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 15:17:46.824: INFO: Number of nodes with available pods: 2 Aug 14 15:17:46.824: INFO: Number of running nodes: 2, number of available pods: 2 Aug 14 15:17:46.824: INFO: Update the DaemonSet to trigger a rollout Aug 14 15:17:46.834: INFO: Updating DaemonSet daemon-set Aug 14 15:17:53.999: INFO: Roll back the DaemonSet before rollout is complete Aug 14 15:17:54.184: INFO: Updating DaemonSet daemon-set Aug 14 15:17:54.184: INFO: Make sure DaemonSet rollback is complete Aug 14 15:17:54.205: INFO: Wrong image for pod: daemon-set-4vwh8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 14 15:17:54.205: INFO: Pod daemon-set-4vwh8 is not available Aug 14 15:17:54.321: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 15:17:55.329: INFO: Wrong image for pod: daemon-set-4vwh8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Aug 14 15:17:55.329: INFO: Pod daemon-set-4vwh8 is not available Aug 14 15:17:55.336: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 14 15:17:56.379: INFO: Pod daemon-set-qblbt is not available Aug 14 15:17:56.402: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5540, will wait for the garbage collector to delete the pods Aug 14 15:17:56.515: INFO: Deleting DaemonSet.extensions daemon-set took: 7.368873ms Aug 14 15:17:56.815: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.848541ms Aug 14 15:18:03.477: INFO: Number of nodes with available pods: 0 Aug 14 15:18:03.477: INFO: Number of running nodes: 0, number of available pods: 0 Aug 14 15:18:03.480: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5540/daemonsets","resourceVersion":"9553938"},"items":null} Aug 14 15:18:03.483: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5540/pods","resourceVersion":"9553938"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:18:03.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5540" for this suite. 
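For reference, a RollingUpdate DaemonSet of the shape this test creates and rolls back can be sketched as below. The container image (docker.io/library/httpd:2.4.38-alpine) and the namespace come from the log; the object name matches the log's "daemon-set", while the labels and selector are illustrative assumptions:

```yaml
# Illustrative sketch only: mirrors the DaemonSet the e2e test exercises.
# Image and namespace are taken from the log; labels are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-5540
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # enables the rollout that the test later rolls back
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
```

The test updates this image to the unpullable `foo:non-existent` and then rolls back mid-rollout (roughly what `kubectl rollout undo daemonset/daemon-set` does); the behavior being asserted is that pods still running the original image are not restarted by the rollback.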
• [SLOW TEST:23.348 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":142,"skipped":2457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:18:03.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:18:03.666: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Pending, waiting for it to be Running (with Ready = true) Aug 14 15:18:05.673: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Pending, waiting for it to be Running (with Ready = true) Aug 14 15:18:07.689: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Running (Ready = false) 
Aug 14 15:18:09.773: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Running (Ready = false) Aug 14 15:18:11.674: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Running (Ready = false) Aug 14 15:18:13.675: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Running (Ready = false) Aug 14 15:18:15.722: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Running (Ready = false) Aug 14 15:18:17.675: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Running (Ready = false) Aug 14 15:18:19.671: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Running (Ready = false) Aug 14 15:18:21.672: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Running (Ready = false) Aug 14 15:18:23.672: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Running (Ready = false) Aug 14 15:18:25.713: INFO: The status of Pod test-webserver-a474d990-4f83-44aa-9aac-839c586d7b7d is Running (Ready = true) Aug 14 15:18:25.718: INFO: Container started at 2020-08-14 15:18:06 +0000 UTC, pod became ready at 2020-08-14 15:18:23 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:18:25.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1200" for this suite. 
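The pod under test here pairs a web server with a readiness probe that has a non-zero initial delay, which is why the log shows a long run of "Running (Ready = false)" before the pod flips to Ready. A minimal sketch, with the caveat that the probe values and image are illustrative assumptions rather than the exact ones in test/e2e/common/container_probe.go:

```yaml
# Illustrative sketch: a pod that starts Running but stays Ready=false
# until its readiness probe's initial delay has elapsed. All values assumed.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:alpine   # assumed; any HTTP server works
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # pod cannot report Ready before this elapses
      periodSeconds: 5
```

There is no liveness probe, so the container is never restarted; the test checks both properties (not ready before the delay, zero restarts).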
• [SLOW TEST:22.208 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2516,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:18:25.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:18:25.931: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Aug 14 15:18:27.100: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:18:27.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3424" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":144,"skipped":2526,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:18:27.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 15:18:27.731: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29bfe5c7-a8f1-45a0-aa63-1d697a0a7241" in namespace "downward-api-7939" to be "Succeeded or Failed" Aug 14 15:18:27.833: INFO: Pod "downwardapi-volume-29bfe5c7-a8f1-45a0-aa63-1d697a0a7241": Phase="Pending", Reason="", readiness=false. Elapsed: 102.068976ms Aug 14 15:18:30.026: INFO: Pod "downwardapi-volume-29bfe5c7-a8f1-45a0-aa63-1d697a0a7241": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.295122264s Aug 14 15:18:32.175: INFO: Pod "downwardapi-volume-29bfe5c7-a8f1-45a0-aa63-1d697a0a7241": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443986885s Aug 14 15:18:34.228: INFO: Pod "downwardapi-volume-29bfe5c7-a8f1-45a0-aa63-1d697a0a7241": Phase="Running", Reason="", readiness=true. Elapsed: 6.496837826s Aug 14 15:18:36.254: INFO: Pod "downwardapi-volume-29bfe5c7-a8f1-45a0-aa63-1d697a0a7241": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.522733014s STEP: Saw pod success Aug 14 15:18:36.254: INFO: Pod "downwardapi-volume-29bfe5c7-a8f1-45a0-aa63-1d697a0a7241" satisfied condition "Succeeded or Failed" Aug 14 15:18:36.271: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-29bfe5c7-a8f1-45a0-aa63-1d697a0a7241 container client-container: STEP: delete the pod Aug 14 15:18:36.534: INFO: Waiting for pod downwardapi-volume-29bfe5c7-a8f1-45a0-aa63-1d697a0a7241 to disappear Aug 14 15:18:36.546: INFO: Pod downwardapi-volume-29bfe5c7-a8f1-45a0-aa63-1d697a0a7241 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:18:36.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7939" for this suite. 
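A downward API volume with an explicit defaultMode, of the kind this test mounts and inspects, might look like the following sketch. The container name "client-container" appears in the log; the mode value, file name, and image are assumptions (the test only verifies that the configured mode is applied to the projected files):

```yaml
# Illustrative sketch: downward API volume whose files get defaultMode applied.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400        # the mode under test, applied to each projected file
      items:
      - path: podname          # assumed file name
        fieldRef:
          fieldPath: metadata.name
```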
• [SLOW TEST:9.096 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2561,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:18:36.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:18:36.621: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:18:42.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9661" for this suite. 
• [SLOW TEST:6.278 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2590,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:18:42.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-972e33fd-89df-47ea-ac72-fb412f6eb28c STEP: Creating a pod to test consume configMaps Aug 14 15:18:43.041: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5bb34431-df57-457f-83d8-bf625d6a267b" in namespace "projected-3079" to be "Succeeded or Failed" Aug 14 15:18:43.069: INFO: Pod "pod-projected-configmaps-5bb34431-df57-457f-83d8-bf625d6a267b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.959066ms Aug 14 15:18:45.074: INFO: Pod "pod-projected-configmaps-5bb34431-df57-457f-83d8-bf625d6a267b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032148656s Aug 14 15:18:47.079: INFO: Pod "pod-projected-configmaps-5bb34431-df57-457f-83d8-bf625d6a267b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037833668s STEP: Saw pod success Aug 14 15:18:47.080: INFO: Pod "pod-projected-configmaps-5bb34431-df57-457f-83d8-bf625d6a267b" satisfied condition "Succeeded or Failed" Aug 14 15:18:47.084: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-5bb34431-df57-457f-83d8-bf625d6a267b container projected-configmap-volume-test: STEP: delete the pod Aug 14 15:18:47.153: INFO: Waiting for pod pod-projected-configmaps-5bb34431-df57-457f-83d8-bf625d6a267b to disappear Aug 14 15:18:47.159: INFO: Pod pod-projected-configmaps-5bb34431-df57-457f-83d8-bf625d6a267b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:18:47.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3079" for this suite. 
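The "mappings and Item mode" in this test's name refer to per-item key-to-path remapping and a per-item file mode on a projected configMap volume source. A sketch, with the configMap name pattern taken from the log and the key, path, and mode values assumed:

```yaml
# Illustrative sketch: projected configMap volume with a key remapped to a
# different path ("mapping") and an explicit per-item mode ("Item mode").
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "cat /etc/projected/path/to/data"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # name pattern from the log
          items:
          - key: data-1            # assumed key
            path: path/to/data     # remapped path (the "mapping")
            mode: 0400             # per-item mode (the "Item mode")
```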
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:18:47.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5677 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5677 STEP: creating replication controller externalsvc in namespace services-5677 I0814 15:18:47.488246 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5677, replica count: 2 I0814 15:18:50.539300 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0814 15:18:53.539765 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing 
the ClusterIP service to type=ExternalName Aug 14 15:18:53.671: INFO: Creating new exec pod Aug 14 15:18:59.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5677 execpodhcdzc -- /bin/sh -x -c nslookup clusterip-service' Aug 14 15:19:01.248: INFO: stderr: "I0814 15:19:01.138621 1660 log.go:172] (0x4000a52000) (0x40007e1220) Create stream\nI0814 15:19:01.142687 1660 log.go:172] (0x4000a52000) (0x40007e1220) Stream added, broadcasting: 1\nI0814 15:19:01.150604 1660 log.go:172] (0x4000a52000) Reply frame received for 1\nI0814 15:19:01.151114 1660 log.go:172] (0x4000a52000) (0x40007e1400) Create stream\nI0814 15:19:01.151165 1660 log.go:172] (0x4000a52000) (0x40007e1400) Stream added, broadcasting: 3\nI0814 15:19:01.152879 1660 log.go:172] (0x4000a52000) Reply frame received for 3\nI0814 15:19:01.153363 1660 log.go:172] (0x4000a52000) (0x4000702000) Create stream\nI0814 15:19:01.153462 1660 log.go:172] (0x4000a52000) (0x4000702000) Stream added, broadcasting: 5\nI0814 15:19:01.155083 1660 log.go:172] (0x4000a52000) Reply frame received for 5\nI0814 15:19:01.225187 1660 log.go:172] (0x4000a52000) Data frame received for 5\nI0814 15:19:01.225561 1660 log.go:172] (0x4000702000) (5) Data frame handling\nI0814 15:19:01.226371 1660 log.go:172] (0x4000702000) (5) Data frame sent\n+ nslookup clusterip-service\nI0814 15:19:01.231506 1660 log.go:172] (0x4000a52000) Data frame received for 3\nI0814 15:19:01.231672 1660 log.go:172] (0x40007e1400) (3) Data frame handling\nI0814 15:19:01.231788 1660 log.go:172] (0x40007e1400) (3) Data frame sent\nI0814 15:19:01.231927 1660 log.go:172] (0x4000a52000) Data frame received for 3\nI0814 15:19:01.231999 1660 log.go:172] (0x40007e1400) (3) Data frame handling\nI0814 15:19:01.232085 1660 log.go:172] (0x40007e1400) (3) Data frame sent\nI0814 15:19:01.232463 1660 log.go:172] (0x4000a52000) Data frame received for 3\nI0814 15:19:01.232574 1660 log.go:172] 
(0x4000a52000) Data frame received for 5\nI0814 15:19:01.232674 1660 log.go:172] (0x4000702000) (5) Data frame handling\nI0814 15:19:01.232848 1660 log.go:172] (0x40007e1400) (3) Data frame handling\nI0814 15:19:01.233889 1660 log.go:172] (0x4000a52000) Data frame received for 1\nI0814 15:19:01.233944 1660 log.go:172] (0x40007e1220) (1) Data frame handling\nI0814 15:19:01.233992 1660 log.go:172] (0x40007e1220) (1) Data frame sent\nI0814 15:19:01.235996 1660 log.go:172] (0x4000a52000) (0x40007e1220) Stream removed, broadcasting: 1\nI0814 15:19:01.238185 1660 log.go:172] (0x4000a52000) Go away received\nI0814 15:19:01.241298 1660 log.go:172] (0x4000a52000) (0x40007e1220) Stream removed, broadcasting: 1\nI0814 15:19:01.241580 1660 log.go:172] (0x4000a52000) (0x40007e1400) Stream removed, broadcasting: 3\nI0814 15:19:01.241764 1660 log.go:172] (0x4000a52000) (0x4000702000) Stream removed, broadcasting: 5\n" Aug 14 15:19:01.249: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5677.svc.cluster.local\tcanonical name = externalsvc.services-5677.svc.cluster.local.\nName:\texternalsvc.services-5677.svc.cluster.local\nAddress: 10.110.169.160\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5677, will wait for the garbage collector to delete the pods Aug 14 15:19:01.312: INFO: Deleting ReplicationController externalsvc took: 6.551917ms Aug 14 15:19:01.613: INFO: Terminating ReplicationController externalsvc pods took: 300.683922ms Aug 14 15:19:13.487: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:19:13.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5677" for this suite. 
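The type change this test performs amounts to patching the Service from ClusterIP to ExternalName. Using the names visible in the log, the resulting object looks roughly like this (ports omitted for brevity):

```yaml
# Sketch of the Service after the type change; names are taken from the log.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-5677
spec:
  type: ExternalName
  # After the change, cluster DNS answers lookups of clusterip-service with a
  # CNAME to this name, matching the nslookup output captured in the log:
  externalName: externalsvc.services-5677.svc.cluster.local
```

This is exactly what the in-pod `nslookup clusterip-service` verifies: the canonical name resolves to externalsvc.services-5677.svc.cluster.local, backed by the two-replica externalsvc replication controller.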
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:26.354 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":148,"skipped":2632,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:19:13.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 14 15:19:14.883: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 14 15:19:16.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015154, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015154, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015154, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015154, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:19:18.966: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015154, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015154, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015154, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015154, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 15:19:21.928: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:19:24.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3142" for this suite. STEP: Destroying namespace "webhook-3142-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.156 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":149,"skipped":2647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 
15:19:24.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 15:19:25.193: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9a2ff9c-f532-46a4-aee9-e5d514c1b928" in namespace "projected-4271" to be "Succeeded or Failed" Aug 14 15:19:25.210: INFO: Pod "downwardapi-volume-b9a2ff9c-f532-46a4-aee9-e5d514c1b928": Phase="Pending", Reason="", readiness=false. Elapsed: 16.888226ms Aug 14 15:19:27.216: INFO: Pod "downwardapi-volume-b9a2ff9c-f532-46a4-aee9-e5d514c1b928": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0222726s Aug 14 15:19:29.220: INFO: Pod "downwardapi-volume-b9a2ff9c-f532-46a4-aee9-e5d514c1b928": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027062946s Aug 14 15:19:31.234: INFO: Pod "downwardapi-volume-b9a2ff9c-f532-46a4-aee9-e5d514c1b928": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.040976768s STEP: Saw pod success Aug 14 15:19:31.235: INFO: Pod "downwardapi-volume-b9a2ff9c-f532-46a4-aee9-e5d514c1b928" satisfied condition "Succeeded or Failed" Aug 14 15:19:31.239: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-b9a2ff9c-f532-46a4-aee9-e5d514c1b928 container client-container: STEP: delete the pod Aug 14 15:19:31.294: INFO: Waiting for pod downwardapi-volume-b9a2ff9c-f532-46a4-aee9-e5d514c1b928 to disappear Aug 14 15:19:31.328: INFO: Pod downwardapi-volume-b9a2ff9c-f532-46a4-aee9-e5d514c1b928 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:19:31.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4271" for this suite. • [SLOW TEST:6.757 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2677,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:19:31.446: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-3e4d7561-c0ab-4f00-a3c4-46c33b728431 STEP: Creating a pod to test consume configMaps Aug 14 15:19:32.260: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f41ca0a3-dff7-4e30-91c5-bb732ca3215d" in namespace "projected-4653" to be "Succeeded or Failed" Aug 14 15:19:32.503: INFO: Pod "pod-projected-configmaps-f41ca0a3-dff7-4e30-91c5-bb732ca3215d": Phase="Pending", Reason="", readiness=false. Elapsed: 243.176586ms Aug 14 15:19:34.697: INFO: Pod "pod-projected-configmaps-f41ca0a3-dff7-4e30-91c5-bb732ca3215d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.436782912s Aug 14 15:19:36.743: INFO: Pod "pod-projected-configmaps-f41ca0a3-dff7-4e30-91c5-bb732ca3215d": Phase="Running", Reason="", readiness=true. Elapsed: 4.483523477s Aug 14 15:19:38.749: INFO: Pod "pod-projected-configmaps-f41ca0a3-dff7-4e30-91c5-bb732ca3215d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.489415475s STEP: Saw pod success Aug 14 15:19:38.749: INFO: Pod "pod-projected-configmaps-f41ca0a3-dff7-4e30-91c5-bb732ca3215d" satisfied condition "Succeeded or Failed" Aug 14 15:19:38.753: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-f41ca0a3-dff7-4e30-91c5-bb732ca3215d container projected-configmap-volume-test: STEP: delete the pod Aug 14 15:19:38.834: INFO: Waiting for pod pod-projected-configmaps-f41ca0a3-dff7-4e30-91c5-bb732ca3215d to disappear Aug 14 15:19:38.845: INFO: Pod pod-projected-configmaps-f41ca0a3-dff7-4e30-91c5-bb732ca3215d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:19:38.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4653" for this suite. • [SLOW TEST:7.411 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client Aug 14 15:19:38.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:19:39.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Aug 14 15:19:39.669: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-14T15:19:39Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-14T15:19:39Z]] name:name1 resourceVersion:9554637 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3087e91b-d720-4410-9813-90ccd70cf500] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Aug 14 15:19:49.798: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-14T15:19:49Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-14T15:19:49Z]] name:name2 resourceVersion:9554673 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:fe741103-4b82-44d5-803d-9f870e6e9549] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Aug 14 15:19:59.808: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-14T15:19:39Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 
fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-14T15:19:59Z]] name:name1 resourceVersion:9554703 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3087e91b-d720-4410-9813-90ccd70cf500] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Aug 14 15:20:09.882: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-14T15:19:49Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-14T15:20:09Z]] name:name2 resourceVersion:9554733 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:fe741103-4b82-44d5-803d-9f870e6e9549] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Aug 14 15:20:20.053: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-14T15:19:39Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-14T15:19:59Z]] name:name1 resourceVersion:9554757 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3087e91b-d720-4410-9813-90ccd70cf500] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Aug 14 15:20:30.063: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-14T15:19:49Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 
fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-14T15:20:09Z]] name:name2 resourceVersion:9554786 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:fe741103-4b82-44d5-803d-9f870e6e9549] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:20:40.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-6998" for this suite. • [SLOW TEST:61.798 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":152,"skipped":2721,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:20:40.658: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 14 15:20:43.160: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 14 15:20:45.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015243, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015243, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015243, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015243, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:20:47.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015243, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733015243, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015243, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015243, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 15:20:50.703: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:20:50.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7638" for this suite. STEP: Destroying namespace "webhook-7638-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.364 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":153,"skipped":2722,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:20:51.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:21:02.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9161" for this suite. • [SLOW TEST:11.159 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":154,"skipped":2753,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:21:02.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:21:02.257: INFO: Creating ReplicaSet my-hostname-basic-c9c6a4f3-cb45-4c2c-ae44-22fac8aacbe4 Aug 14 15:21:02.300: INFO: Pod name my-hostname-basic-c9c6a4f3-cb45-4c2c-ae44-22fac8aacbe4: Found 0 pods out of 1 Aug 14 15:21:07.328: INFO: Pod name my-hostname-basic-c9c6a4f3-cb45-4c2c-ae44-22fac8aacbe4: Found 1 pods out of 1 Aug 14 15:21:07.328: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c9c6a4f3-cb45-4c2c-ae44-22fac8aacbe4" is running Aug 14 15:21:07.334: INFO: Pod "my-hostname-basic-c9c6a4f3-cb45-4c2c-ae44-22fac8aacbe4-zbztn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 15:21:02 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 15:21:05 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 15:21:05 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 15:21:02 +0000 UTC Reason: Message:}]) 
Aug 14 15:21:07.334: INFO: Trying to dial the pod Aug 14 15:21:12.356: INFO: Controller my-hostname-basic-c9c6a4f3-cb45-4c2c-ae44-22fac8aacbe4: Got expected result from replica 1 [my-hostname-basic-c9c6a4f3-cb45-4c2c-ae44-22fac8aacbe4-zbztn]: "my-hostname-basic-c9c6a4f3-cb45-4c2c-ae44-22fac8aacbe4-zbztn", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:21:12.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9773" for this suite. • [SLOW TEST:10.185 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":155,"skipped":2770,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:21:12.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:21:16.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2904" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2796,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:21:16.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 15:21:16.706: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4a48ea0-e27d-43f4-b45a-745013ce2751" in namespace "downward-api-9936" to be "Succeeded or Failed" Aug 14 15:21:16.736: INFO: Pod "downwardapi-volume-f4a48ea0-e27d-43f4-b45a-745013ce2751": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.178777ms Aug 14 15:21:18.743: INFO: Pod "downwardapi-volume-f4a48ea0-e27d-43f4-b45a-745013ce2751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036639283s Aug 14 15:21:20.751: INFO: Pod "downwardapi-volume-f4a48ea0-e27d-43f4-b45a-745013ce2751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044497706s STEP: Saw pod success Aug 14 15:21:20.751: INFO: Pod "downwardapi-volume-f4a48ea0-e27d-43f4-b45a-745013ce2751" satisfied condition "Succeeded or Failed" Aug 14 15:21:20.756: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f4a48ea0-e27d-43f4-b45a-745013ce2751 container client-container: STEP: delete the pod Aug 14 15:21:20.834: INFO: Waiting for pod downwardapi-volume-f4a48ea0-e27d-43f4-b45a-745013ce2751 to disappear Aug 14 15:21:20.865: INFO: Pod downwardapi-volume-f4a48ea0-e27d-43f4-b45a-745013ce2751 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:21:20.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9936" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2810,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:21:20.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0814 15:21:34.151108 10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 14 15:21:34.151: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:21:34.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1821" for this suite. 
• [SLOW TEST:13.826 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":158,"skipped":2841,"failed":0}
S
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:21:34.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3182.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3182.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3182.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3182.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.78.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.78.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.78.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.78.160_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3182.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3182.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3182.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3182.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3182.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.78.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.78.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.78.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.78.160_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 14 15:21:43.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:44.053: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:44.071: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:44.150: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:44.729: INFO: Unable to read jessie_udp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:44.747: INFO: Unable to read jessie_tcp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:44.765: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod 
dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:44.831: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:44.884: INFO: Lookups using dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016 failed for: [wheezy_udp@dns-test-service.dns-3182.svc.cluster.local wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_udp@dns-test-service.dns-3182.svc.cluster.local jessie_tcp@dns-test-service.dns-3182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local] Aug 14 15:21:49.892: INFO: Unable to read wheezy_udp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:49.898: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:49.903: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:49.908: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod 
dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:49.939: INFO: Unable to read jessie_udp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:49.942: INFO: Unable to read jessie_tcp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:49.946: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:49.950: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:49.979: INFO: Lookups using dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016 failed for: [wheezy_udp@dns-test-service.dns-3182.svc.cluster.local wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_udp@dns-test-service.dns-3182.svc.cluster.local jessie_tcp@dns-test-service.dns-3182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local] Aug 14 15:21:55.408: INFO: Unable to read wheezy_udp@dns-test-service.dns-3182.svc.cluster.local from pod 
dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:55.727: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:55.808: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:55.813: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:56.222: INFO: Unable to read jessie_udp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:56.230: INFO: Unable to read jessie_tcp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:56.234: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:56.237: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not 
find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:21:56.260: INFO: Lookups using dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016 failed for: [wheezy_udp@dns-test-service.dns-3182.svc.cluster.local wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_udp@dns-test-service.dns-3182.svc.cluster.local jessie_tcp@dns-test-service.dns-3182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local] Aug 14 15:21:59.893: INFO: Unable to read wheezy_udp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:00.237: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:00.241: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:00.246: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:00.285: INFO: Unable to read jessie_udp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods 
dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:00.289: INFO: Unable to read jessie_tcp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:00.292: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:00.295: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:00.314: INFO: Lookups using dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016 failed for: [wheezy_udp@dns-test-service.dns-3182.svc.cluster.local wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_udp@dns-test-service.dns-3182.svc.cluster.local jessie_tcp@dns-test-service.dns-3182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local] Aug 14 15:22:04.915: INFO: Unable to read wheezy_udp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:05.279: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods 
dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:05.284: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:05.288: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:05.323: INFO: Unable to read jessie_udp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:05.328: INFO: Unable to read jessie_tcp@dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:05.333: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:05.338: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:05.427: INFO: Lookups using dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016 failed for: [wheezy_udp@dns-test-service.dns-3182.svc.cluster.local wheezy_tcp@dns-test-service.dns-3182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_udp@dns-test-service.dns-3182.svc.cluster.local jessie_tcp@dns-test-service.dns-3182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local] Aug 14 15:22:09.904: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:09.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local from pod dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016: the server could not find the requested resource (get pods dns-test-a2d4a114-03c2-4d48-9431-5400bb021016) Aug 14 15:22:09.970: INFO: Lookups using dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3182.svc.cluster.local] Aug 14 15:22:15.055: INFO: DNS probes using dns-3182/dns-test-a2d4a114-03c2-4d48-9431-5400bb021016 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:22:15.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3182" for this suite. 
• [SLOW TEST:41.117 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":159,"skipped":2842,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:22:15.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-16c7ed8b-18ea-4dbd-b2de-5b916ad3db62
STEP: Creating a pod to test consume configMaps
Aug 14 15:22:16.555: INFO: Waiting up to 5m0s for pod "pod-configmaps-b0174745-e7b6-419e-b9d7-a5519187a648" in namespace "configmap-6249" to be "Succeeded or Failed"
Aug 14 15:22:16.710: INFO: Pod "pod-configmaps-b0174745-e7b6-419e-b9d7-a5519187a648": Phase="Pending", Reason="", readiness=false. Elapsed: 153.91821ms
Aug 14 15:22:19.345: INFO: Pod "pod-configmaps-b0174745-e7b6-419e-b9d7-a5519187a648": Phase="Pending", Reason="", readiness=false. Elapsed: 2.789477701s
Aug 14 15:22:21.511: INFO: Pod "pod-configmaps-b0174745-e7b6-419e-b9d7-a5519187a648": Phase="Pending", Reason="", readiness=false. Elapsed: 4.95516708s
Aug 14 15:22:23.518: INFO: Pod "pod-configmaps-b0174745-e7b6-419e-b9d7-a5519187a648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.962192325s
STEP: Saw pod success
Aug 14 15:22:23.518: INFO: Pod "pod-configmaps-b0174745-e7b6-419e-b9d7-a5519187a648" satisfied condition "Succeeded or Failed"
Aug 14 15:22:23.523: INFO: Trying to get logs from node kali-worker pod pod-configmaps-b0174745-e7b6-419e-b9d7-a5519187a648 container configmap-volume-test: 
STEP: delete the pod
Aug 14 15:22:23.563: INFO: Waiting for pod pod-configmaps-b0174745-e7b6-419e-b9d7-a5519187a648 to disappear
Aug 14 15:22:23.629: INFO: Pod pod-configmaps-b0174745-e7b6-419e-b9d7-a5519187a648 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:22:23.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6249" for this suite. 
• [SLOW TEST:7.815 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2860,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:22:23.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 14 15:22:24.315: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-a dc421278-d6e0-4583-ab4f-15103b37a4c2 9555554 0 2020-08-14 15:22:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-14 15:22:24 +0000 UTC 
FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 14 15:22:24.317: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-a dc421278-d6e0-4583-ab4f-15103b37a4c2 9555554 0 2020-08-14 15:22:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-14 15:22:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 14 15:22:34.328: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-a dc421278-d6e0-4583-ab4f-15103b37a4c2 9555594 0 2020-08-14 15:22:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-14 15:22:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 14 15:22:34.329: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-a dc421278-d6e0-4583-ab4f-15103b37a4c2 9555594 0 2020-08-14 15:22:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-14 15:22:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 14 15:22:44.380: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-a dc421278-d6e0-4583-ab4f-15103b37a4c2 9555625 0 2020-08-14 15:22:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-14 15:22:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 14 15:22:44.381: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-a dc421278-d6e0-4583-ab4f-15103b37a4c2 9555625 0 2020-08-14 15:22:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[{e2e.test Update v1 2020-08-14 15:22:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 14 15:22:54.425: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-a dc421278-d6e0-4583-ab4f-15103b37a4c2 9555655 0 2020-08-14 15:22:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-14 15:22:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 14 15:22:54.426: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-a dc421278-d6e0-4583-ab4f-15103b37a4c2 9555655 0 2020-08-14 15:22:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-14 15:22:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 
34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 14 15:23:04.439: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-b 42f3688e-f1b9-47da-9ee5-3363b79fd0d9 9555685 0 2020-08-14 15:23:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-14 15:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 14 15:23:04.441: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-b 42f3688e-f1b9-47da-9ee5-3363b79fd0d9 9555685 0 2020-08-14 15:23:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-14 15:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 14 15:23:14.453: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-b 42f3688e-f1b9-47da-9ee5-3363b79fd0d9 
9555715 0 2020-08-14 15:23:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-14 15:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 14 15:23:14.453: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9952 /api/v1/namespaces/watch-9952/configmaps/e2e-watch-test-configmap-b 42f3688e-f1b9-47da-9ee5-3363b79fd0d9 9555715 0 2020-08-14 15:23:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-14 15:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:23:24.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9952" for this suite. 
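A note on reading the `FieldsV1{Raw:*[...]}` arrays in the watch events above: they are the raw ASCII bytes of the managedFields JSON that the API server tracks for server-side apply, printed as a Go byte slice. A minimal sketch decoding the first array from the `e2e-watch-test-configmap-a` entry:

```python
# The FieldsV1 Raw array is a JSON document rendered as decimal byte values.
# This list is copied from the e2e-watch-test-configmap-a log entry above.
raw = [123, 34, 102, 58, 100, 97, 116, 97, 34, 58, 123, 34, 46, 34, 58, 123,
       125, 44, 34, 102, 58, 109, 117, 116, 97, 116, 105, 111, 110, 34, 58,
       123, 125, 125, 44, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97,
       34, 58, 123, 34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34,
       46, 34, 58, 123, 125, 44, 34, 102, 58, 119, 97, 116, 99, 104, 45, 116,
       104, 105, 115, 45, 99, 111, 110, 102, 105, 103, 109, 97, 112, 34, 58,
       123, 125, 125, 125, 125]

# Decodes to the managedFields entry for the configmap's data and labels.
print(bytes(raw).decode("ascii"))
```

The decoded JSON shows which fields (`data.mutation` and the `watch-this-configmap` label) the `e2e.test` manager owns.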
• [SLOW TEST:60.829 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":161,"skipped":2861,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:23:24.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-0e766620-5607-4705-a235-1142c8a26441 STEP: Creating a pod to test consume secrets Aug 14 15:23:24.585: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9045ebc-5c02-46ef-a7d4-ad8ab9232b44" in namespace "projected-8446" to be "Succeeded or Failed" Aug 14 15:23:24.598: INFO: Pod "pod-projected-secrets-a9045ebc-5c02-46ef-a7d4-ad8ab9232b44": Phase="Pending", Reason="", readiness=false. Elapsed: 13.543836ms Aug 14 15:23:26.606: INFO: Pod "pod-projected-secrets-a9045ebc-5c02-46ef-a7d4-ad8ab9232b44": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021343482s Aug 14 15:23:28.681: INFO: Pod "pod-projected-secrets-a9045ebc-5c02-46ef-a7d4-ad8ab9232b44": Phase="Running", Reason="", readiness=true. Elapsed: 4.096448603s Aug 14 15:23:30.689: INFO: Pod "pod-projected-secrets-a9045ebc-5c02-46ef-a7d4-ad8ab9232b44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104614654s STEP: Saw pod success Aug 14 15:23:30.690: INFO: Pod "pod-projected-secrets-a9045ebc-5c02-46ef-a7d4-ad8ab9232b44" satisfied condition "Succeeded or Failed" Aug 14 15:23:30.695: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-a9045ebc-5c02-46ef-a7d4-ad8ab9232b44 container projected-secret-volume-test: STEP: delete the pod Aug 14 15:23:30.726: INFO: Waiting for pod pod-projected-secrets-a9045ebc-5c02-46ef-a7d4-ad8ab9232b44 to disappear Aug 14 15:23:30.750: INFO: Pod pod-projected-secrets-a9045ebc-5c02-46ef-a7d4-ad8ab9232b44 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:23:30.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8446" for this suite. 
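The Pending → Running → Succeeded progression above is the framework polling the pod phase roughly every two seconds, up to the 5m0s timeout. A minimal sketch of such a wait loop (the `get_phase` callable is a hypothetical stand-in for the framework's pod lookup, not the actual e2e helper):

```python
import time

def wait_for_pod(get_phase, timeout_s=300, interval_s=2.0):
    """Poll until the pod reaches a terminal phase or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence matching the log above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_pod(lambda: next(phases), interval_s=0.0))  # Succeeded
```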
• [SLOW TEST:6.288 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2889,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:23:30.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:23:30.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9609" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":163,"skipped":2896,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:23:30.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Aug 14 15:23:31.037: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 14 15:23:31.061: INFO: Waiting for terminating namespaces to be deleted... 
Aug 14 15:23:31.067: INFO: Logging pods the kubelet thinks is on node kali-worker before test Aug 14 15:23:31.088: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.088: INFO: Container kindnet-cni ready: true, restart count 1 Aug 14 15:23:31.088: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.088: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 14 15:23:31.088: INFO: rally-1cd7c17f-v3nmd8vz-589d46bdb9-zmgmx from c-rally-1cd7c17f-mfngbtu6 started at 2020-08-13 20:24:45 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.088: INFO: Container rally-1cd7c17f-v3nmd8vz ready: true, restart count 0 Aug 14 15:23:31.088: INFO: rally-466602a1-db17uwyh-z26cp from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:59:13 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.088: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0 Aug 14 15:23:31.088: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.088: INFO: Container kube-proxy ready: true, restart count 0 Aug 14 15:23:31.088: INFO: rally-466602a1-db17uwyh-6xgdb from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:51:36 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.088: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0 Aug 14 15:23:31.088: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.088: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 14 15:23:31.088: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Aug 14 15:23:31.125: INFO: rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh from c-rally-6c5ea4be-pyo3sp3v started at 2020-08-11 
18:16:03 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.125: INFO: Container rally-6c5ea4be-96nyoha6 ready: true, restart count 73 Aug 14 15:23:31.125: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.125: INFO: Container kube-proxy ready: true, restart count 0 Aug 14 15:23:31.125: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.125: INFO: Container kindnet-cni ready: true, restart count 1 Aug 14 15:23:31.125: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.125: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 14 15:23:31.125: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.125: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 14 15:23:31.125: INFO: rally-7104017d-j5l4uv4e-0 from c-rally-7104017d-2oejvhl7 started at 2020-08-11 18:51:39 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.125: INFO: Container rally-7104017d-j5l4uv4e ready: true, restart count 1 Aug 14 15:23:31.125: INFO: rally-1cd7c17f-v3nmd8vz-589d46bdb9-h9wtg from c-rally-1cd7c17f-mfngbtu6 started at 2020-08-13 20:24:47 +0000 UTC (1 container statuses recorded) Aug 14 15:23:31.125: INFO: Container rally-1cd7c17f-v3nmd8vz ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162b2bc36c920b0d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.162b2bc36ebaaed9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:23:32.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-557" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":164,"skipped":2896,"failed":0} ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:23:32.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Aug 14 15:23:32.309: INFO: Waiting up to 5m0s for pod "client-containers-60381869-02cf-4421-81ba-97128b812fda" in namespace "containers-4037" to be "Succeeded or Failed" Aug 14 15:23:32.319: INFO: Pod "client-containers-60381869-02cf-4421-81ba-97128b812fda": Phase="Pending", Reason="", 
readiness=false. Elapsed: 10.105854ms Aug 14 15:23:34.355: INFO: Pod "client-containers-60381869-02cf-4421-81ba-97128b812fda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046290827s Aug 14 15:23:36.363: INFO: Pod "client-containers-60381869-02cf-4421-81ba-97128b812fda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054306454s STEP: Saw pod success Aug 14 15:23:36.363: INFO: Pod "client-containers-60381869-02cf-4421-81ba-97128b812fda" satisfied condition "Succeeded or Failed" Aug 14 15:23:36.368: INFO: Trying to get logs from node kali-worker pod client-containers-60381869-02cf-4421-81ba-97128b812fda container test-container: STEP: delete the pod Aug 14 15:23:36.502: INFO: Waiting for pod client-containers-60381869-02cf-4421-81ba-97128b812fda to disappear Aug 14 15:23:36.534: INFO: Pod client-containers-60381869-02cf-4421-81ba-97128b812fda no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:23:36.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4037" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2896,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:23:36.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Aug 14 15:23:36.700: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 14 15:23:36.735: INFO: Waiting for terminating namespaces to be deleted... 
Aug 14 15:23:36.764: INFO: Logging pods the kubelet thinks is on node kali-worker before test Aug 14 15:23:36.784: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.784: INFO: Container kindnet-cni ready: true, restart count 1 Aug 14 15:23:36.784: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.784: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 14 15:23:36.784: INFO: rally-1cd7c17f-v3nmd8vz-589d46bdb9-zmgmx from c-rally-1cd7c17f-mfngbtu6 started at 2020-08-13 20:24:45 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.784: INFO: Container rally-1cd7c17f-v3nmd8vz ready: true, restart count 0 Aug 14 15:23:36.784: INFO: rally-466602a1-db17uwyh-z26cp from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:59:13 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.784: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0 Aug 14 15:23:36.784: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.785: INFO: Container kube-proxy ready: true, restart count 0 Aug 14 15:23:36.785: INFO: rally-466602a1-db17uwyh-6xgdb from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:51:36 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.785: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0 Aug 14 15:23:36.785: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.785: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 14 15:23:36.785: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Aug 14 15:23:36.813: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container 
statuses recorded) Aug 14 15:23:36.813: INFO: Container kube-proxy ready: true, restart count 0 Aug 14 15:23:36.813: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.813: INFO: Container kindnet-cni ready: true, restart count 1 Aug 14 15:23:36.813: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.813: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 14 15:23:36.813: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.813: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 14 15:23:36.813: INFO: rally-7104017d-j5l4uv4e-0 from c-rally-7104017d-2oejvhl7 started at 2020-08-11 18:51:39 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.813: INFO: Container rally-7104017d-j5l4uv4e ready: true, restart count 1 Aug 14 15:23:36.813: INFO: rally-1cd7c17f-v3nmd8vz-589d46bdb9-h9wtg from c-rally-1cd7c17f-mfngbtu6 started at 2020-08-13 20:24:47 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.813: INFO: Container rally-1cd7c17f-v3nmd8vz ready: true, restart count 0 Aug 14 15:23:36.813: INFO: rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh from c-rally-6c5ea4be-pyo3sp3v started at 2020-08-11 18:16:03 +0000 UTC (1 container statuses recorded) Aug 14 15:23:36.813: INFO: Container rally-6c5ea4be-96nyoha6 ready: true, restart count 73 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node kali-worker STEP: verifying the node has the label node kali-worker2 Aug 14 15:23:39.028: INFO: Pod rally-19e4df10-30wkw9yu-glqpf requesting resource cpu=0m on Node kali-worker Aug 14 15:23:39.028: 
INFO: Pod rally-19e4df10-30wkw9yu-qbmr7 requesting resource cpu=0m on Node kali-worker2 Aug 14 15:23:39.028: INFO: Pod rally-1cd7c17f-v3nmd8vz-589d46bdb9-h9wtg requesting resource cpu=0m on Node kali-worker2 Aug 14 15:23:39.028: INFO: Pod rally-1cd7c17f-v3nmd8vz-589d46bdb9-zmgmx requesting resource cpu=0m on Node kali-worker Aug 14 15:23:39.029: INFO: Pod rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh requesting resource cpu=0m on Node kali-worker2 Aug 14 15:23:39.029: INFO: Pod rally-7104017d-j5l4uv4e-0 requesting resource cpu=0m on Node kali-worker2 Aug 14 15:23:39.029: INFO: Pod rally-824618b1-6cukkjuh-lb7rq requesting resource cpu=0m on Node kali-worker Aug 14 15:23:39.029: INFO: Pod rally-824618b1-6cukkjuh-m84l4 requesting resource cpu=0m on Node kali-worker2 Aug 14 15:23:39.029: INFO: Pod kindnet-njbgt requesting resource cpu=100m on Node kali-worker Aug 14 15:23:39.029: INFO: Pod kindnet-pk4xb requesting resource cpu=100m on Node kali-worker2 Aug 14 15:23:39.029: INFO: Pod kube-proxy-qwsfx requesting resource cpu=0m on Node kali-worker Aug 14 15:23:39.029: INFO: Pod kube-proxy-vk6jr requesting resource cpu=0m on Node kali-worker2 STEP: Starting Pods to consume most of the cluster CPU. Aug 14 15:23:39.029: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker Aug 14 15:23:39.039: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-888975e7-1ec2-4fe1-8efe-0fe24722fa3d.162b2bc5421c2c2e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7707/filler-pod-888975e7-1ec2-4fe1-8efe-0fe24722fa3d to kali-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-888975e7-1ec2-4fe1-8efe-0fe24722fa3d.162b2bc5b93cde61], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-888975e7-1ec2-4fe1-8efe-0fe24722fa3d.162b2bc64674280f], Reason = [Created], Message = [Created container filler-pod-888975e7-1ec2-4fe1-8efe-0fe24722fa3d] STEP: Considering event: Type = [Normal], Name = [filler-pod-888975e7-1ec2-4fe1-8efe-0fe24722fa3d.162b2bc65d5f0170], Reason = [Started], Message = [Started container filler-pod-888975e7-1ec2-4fe1-8efe-0fe24722fa3d] STEP: Considering event: Type = [Normal], Name = [filler-pod-897d16b3-424a-4741-a001-ddc8e9cd2832.162b2bc545ad941c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7707/filler-pod-897d16b3-424a-4741-a001-ddc8e9cd2832 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-897d16b3-424a-4741-a001-ddc8e9cd2832.162b2bc5ec490e19], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-897d16b3-424a-4741-a001-ddc8e9cd2832.162b2bc655e5f777], Reason = [Created], Message = [Created container filler-pod-897d16b3-424a-4741-a001-ddc8e9cd2832] STEP: Considering event: Type = [Normal], Name = [filler-pod-897d16b3-424a-4741-a001-ddc8e9cd2832.162b2bc66599d143], Reason = [Started], Message = [Started container filler-pod-897d16b3-424a-4741-a001-ddc8e9cd2832] STEP: Considering event: Type = [Warning], Name = [additional-pod.162b2bc6ada9c9aa], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, 
that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.162b2bc6b23622d3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node kali-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node kali-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:23:46.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7707" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:9.664 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":166,"skipped":2913,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 
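The filler-pod sizing in the resource-limits test above follows from simple arithmetic: the filler pod requests the node's allocatable CPU minus what existing pods already request, so any further pod requesting CPU fails with `Insufficient cpu`. A sketch with numbers taken from the log (allocatable is inferred as 11230m from the 100m kindnet request plus the 11130m filler, an assumption, not a logged value):

```python
# Inferred allocatable CPU on kali-worker: 100m already requested + 11130m filler.
allocatable_m = 11230
requested_m = 100                       # kindnet-njbgt's cpu request
filler_m = allocatable_m - requested_m  # the filler pod's request
print(filler_m)                         # matches the 11130m in the log

# The additional pod requests some CPU (value hypothetical) and cannot fit.
extra_request_m = 100
print(requested_m + filler_m + extra_request_m > allocatable_m)  # True
```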
15:23:46.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 14 15:23:50.382: INFO: &Pod{ObjectMeta:{send-events-288ae449-4238-4deb-918e-7ca39c7187d2 events-2040 /api/v1/namespaces/events-2040/pods/send-events-288ae449-4238-4deb-918e-7ca39c7187d2 7d91609d-5fce-471e-ad5d-b47c888a3730 9555934 0 2020-08-14 15:23:46 +0000 UTC map[name:foo time:328059597] map[] [] [] [{e2e.test Update v1 2020-08-14 15:23:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 
97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-14 15:23:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 
101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 49 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8h6k2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8h6k2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8h6k2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:23:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:23:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:23:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:23:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.111,StartTime:2020-08-14 15:23:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 15:23:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://d5db216f28c13b1072ac963ecb3f12a413ea60404a042a5f24d880a4da069db8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.111,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 14 15:23:52.394: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 14 15:23:54.404: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:23:54.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2040" for this suite. 
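Editor's note: the long `FieldsV1{Raw:*[123 34 102 58 ...]}` arrays in the pod and deployment dumps above are Go's default rendering of a `[]byte` slice — each number is the decimal value of one UTF-8 byte of the managed-fields JSON. A minimal sketch (not part of the e2e framework) for decoding such an array back into readable JSON:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeRaw converts the space-separated decimal byte values that the log
// prints for FieldsV1{Raw:*[...]} back into the UTF-8 string they encode.
func decodeRaw(s string) (string, error) {
	var b []byte
	for _, f := range strings.Fields(s) {
		n, err := strconv.Atoi(f)
		if err != nil {
			return "", err
		}
		b = append(b, byte(n))
	}
	return string(b), nil
}

func main() {
	// Opening bytes of one such array: 123='{', 34='"', 102='f', 58=':', ...
	sample := "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
	out, err := decodeRaw(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"f:metadata":{}}
}
```

Applied to the full arrays, this recovers the server-side-apply managed-fields entries (e.g. `{"f:metadata":{"f:labels":...}}`) that each controller recorded on the object.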
• [SLOW TEST:8.294 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":167,"skipped":2932,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:23:54.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-8bece681-e1ba-44f6-8b8d-defcac2549c9 Aug 14 15:23:54.662: INFO: Pod name my-hostname-basic-8bece681-e1ba-44f6-8b8d-defcac2549c9: Found 0 pods out of 1 Aug 14 15:23:59.688: INFO: Pod name my-hostname-basic-8bece681-e1ba-44f6-8b8d-defcac2549c9: Found 1 pods out of 1 Aug 14 15:23:59.688: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-8bece681-e1ba-44f6-8b8d-defcac2549c9" are running Aug 14 15:23:59.758: INFO: Pod "my-hostname-basic-8bece681-e1ba-44f6-8b8d-defcac2549c9-kchl8" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 15:23:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 15:23:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 15:23:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-14 15:23:54 +0000 UTC Reason: Message:}]) Aug 14 15:23:59.759: INFO: Trying to dial the pod Aug 14 15:24:04.781: INFO: Controller my-hostname-basic-8bece681-e1ba-44f6-8b8d-defcac2549c9: Got expected result from replica 1 [my-hostname-basic-8bece681-e1ba-44f6-8b8d-defcac2549c9-kchl8]: "my-hostname-basic-8bece681-e1ba-44f6-8b8d-defcac2549c9-kchl8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:24:04.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5923" for this suite. 
• [SLOW TEST:10.252 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":168,"skipped":2946,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:24:04.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-c4848447-0aab-433a-93bf-f741f58f6e14 STEP: Creating a pod to test consume configMaps Aug 14 15:24:05.329: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1692ac11-6f77-4502-973f-195700be2c68" in namespace "projected-3978" to be "Succeeded or Failed" Aug 14 15:24:05.467: INFO: Pod "pod-projected-configmaps-1692ac11-6f77-4502-973f-195700be2c68": Phase="Pending", Reason="", readiness=false. 
Elapsed: 138.158163ms Aug 14 15:24:07.474: INFO: Pod "pod-projected-configmaps-1692ac11-6f77-4502-973f-195700be2c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144648526s Aug 14 15:24:09.480: INFO: Pod "pod-projected-configmaps-1692ac11-6f77-4502-973f-195700be2c68": Phase="Running", Reason="", readiness=true. Elapsed: 4.15105964s Aug 14 15:24:11.486: INFO: Pod "pod-projected-configmaps-1692ac11-6f77-4502-973f-195700be2c68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.157437022s STEP: Saw pod success Aug 14 15:24:11.487: INFO: Pod "pod-projected-configmaps-1692ac11-6f77-4502-973f-195700be2c68" satisfied condition "Succeeded or Failed" Aug 14 15:24:11.491: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-1692ac11-6f77-4502-973f-195700be2c68 container projected-configmap-volume-test: STEP: delete the pod Aug 14 15:24:11.586: INFO: Waiting for pod pod-projected-configmaps-1692ac11-6f77-4502-973f-195700be2c68 to disappear Aug 14 15:24:11.594: INFO: Pod pod-projected-configmaps-1692ac11-6f77-4502-973f-195700be2c68 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:24:11.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3978" for this suite. 
• [SLOW TEST:6.810 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2963,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:24:11.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:24:19.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5851" for this suite. 
• [SLOW TEST:8.174 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2988,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:24:19.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-c2badeae-773b-4f5b-856c-26b0e95ff53a STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-c2badeae-773b-4f5b-856c-26b0e95ff53a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:24:25.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "projected-5977" for this suite. • [SLOW TEST:6.218 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":3004,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:24:26.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 15:24:26.083: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c06bc713-2de3-475b-b467-f40815448a67" in namespace "projected-4831" to be "Succeeded or Failed" Aug 14 15:24:26.105: INFO: Pod "downwardapi-volume-c06bc713-2de3-475b-b467-f40815448a67": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.899374ms Aug 14 15:24:28.113: INFO: Pod "downwardapi-volume-c06bc713-2de3-475b-b467-f40815448a67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02961295s Aug 14 15:24:30.141: INFO: Pod "downwardapi-volume-c06bc713-2de3-475b-b467-f40815448a67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058128217s STEP: Saw pod success Aug 14 15:24:30.142: INFO: Pod "downwardapi-volume-c06bc713-2de3-475b-b467-f40815448a67" satisfied condition "Succeeded or Failed" Aug 14 15:24:30.147: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-c06bc713-2de3-475b-b467-f40815448a67 container client-container: STEP: delete the pod Aug 14 15:24:30.173: INFO: Waiting for pod downwardapi-volume-c06bc713-2de3-475b-b467-f40815448a67 to disappear Aug 14 15:24:30.198: INFO: Pod downwardapi-volume-c06bc713-2de3-475b-b467-f40815448a67 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:24:30.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4831" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":3024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:24:30.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:24:30.353: INFO: Creating deployment "test-recreate-deployment" Aug 14 15:24:30.363: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 14 15:24:30.442: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Aug 14 15:24:32.545: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 14 15:24:32.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015470, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015470, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment 
does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015470, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015470, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:24:34.560: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 14 15:24:34.591: INFO: Updating deployment test-recreate-deployment Aug 14 15:24:34.591: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 14 15:24:35.850: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3566 /apis/apps/v1/namespaces/deployment-3566/deployments/test-recreate-deployment c7fdd8a3-61bf-4d1d-aeaf-eaed85274adf 9556270 2 2020-08-14 15:24:30 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-14 15:24:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 
34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-14 15:24:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 
125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003adc478 
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-14 15:24:35 +0000 UTC,LastTransitionTime:2020-08-14 15:24:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-08-14 15:24:35 +0000 UTC,LastTransitionTime:2020-08-14 15:24:30 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Aug 14 15:24:35.861: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-3566 /apis/apps/v1/namespaces/deployment-3566/replicasets/test-recreate-deployment-d5667d9c7 d9002ad6-cad1-4e4e-8f76-ab6ebb34efbb 9556265 1 2020-08-14 15:24:35 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c7fdd8a3-61bf-4d1d-aeaf-eaed85274adf 0x4003adc980 0x4003adc981}] [] [{kube-controller-manager Update apps/v1 2020-08-14 15:24:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 
121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 55 102 100 100 56 97 51 45 54 49 98 102 45 52 100 49 100 45 97 101 97 102 45 101 97 101 100 56 53 50 55 52 97 100 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 
104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003adc9f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 14 15:24:35.861: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 14 15:24:35.862: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c deployment-3566 /apis/apps/v1/namespaces/deployment-3566/replicasets/test-recreate-deployment-74d98b5f7c 15241553-538d-465d-b4fe-241f9c0c10a2 9556256 2 2020-08-14 15:24:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c7fdd8a3-61bf-4d1d-aeaf-eaed85274adf 0x4003adc887 0x4003adc888}] [] [{kube-controller-manager Update apps/v1 2020-08-14 15:24:35 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7fdd8a3-61bf-4d1d-aeaf-eaed85274adf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}},}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003adc918 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 14 15:24:35.873: INFO: Pod "test-recreate-deployment-d5667d9c7-b8qkk" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-b8qkk test-recreate-deployment-d5667d9c7- deployment-3566 /api/v1/namespaces/deployment-3566/pods/test-recreate-deployment-d5667d9c7-b8qkk c6fbb61c-9b64-4c0c-8585-d7a1a9b2cbc1 9556268 0 2020-08-14 15:24:35 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 d9002ad6-cad1-4e4e-8f76-ab6ebb34efbb 0x4003adcec0 0x4003adcec1}] [] [{kube-controller-manager Update v1 2020-08-14 15:24:35 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9002ad6-cad1-4e4e-8f76-ab6ebb34efbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-14 15:24:35 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dmphd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dmphd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dmphd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle
:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:24:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:24:35 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:24:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:24:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-14 15:24:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:24:35.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3566" for this suite. 
• [SLOW TEST:6.041 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":173,"skipped":3049,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:24:36.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2529 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-2529 Aug 14 15:24:36.718: INFO: Found 0 stateful pods, waiting for 1 Aug 14 15:24:46.726: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying 
the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 14 15:24:46.805: INFO: Deleting all statefulset in ns statefulset-2529 Aug 14 15:24:46.885: INFO: Scaling statefulset ss to 0 Aug 14 15:25:07.048: INFO: Waiting for statefulset status.replicas updated to 0 Aug 14 15:25:07.053: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:25:07.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2529" for this suite. • [SLOW TEST:30.821 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":174,"skipped":3069,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:25:07.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 15:25:07.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eabd10f7-4b7b-4d8a-883d-0a64c7ca8b95" in namespace "downward-api-2224" to be "Succeeded or Failed" Aug 14 15:25:07.193: INFO: Pod "downwardapi-volume-eabd10f7-4b7b-4d8a-883d-0a64c7ca8b95": Phase="Pending", Reason="", readiness=false. Elapsed: 5.493925ms Aug 14 15:25:09.382: INFO: Pod "downwardapi-volume-eabd10f7-4b7b-4d8a-883d-0a64c7ca8b95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194702092s Aug 14 15:25:11.389: INFO: Pod "downwardapi-volume-eabd10f7-4b7b-4d8a-883d-0a64c7ca8b95": Phase="Running", Reason="", readiness=true. Elapsed: 4.202149628s Aug 14 15:25:13.397: INFO: Pod "downwardapi-volume-eabd10f7-4b7b-4d8a-883d-0a64c7ca8b95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.209930103s STEP: Saw pod success Aug 14 15:25:13.397: INFO: Pod "downwardapi-volume-eabd10f7-4b7b-4d8a-883d-0a64c7ca8b95" satisfied condition "Succeeded or Failed" Aug 14 15:25:13.403: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-eabd10f7-4b7b-4d8a-883d-0a64c7ca8b95 container client-container: STEP: delete the pod Aug 14 15:25:13.440: INFO: Waiting for pod downwardapi-volume-eabd10f7-4b7b-4d8a-883d-0a64c7ca8b95 to disappear Aug 14 15:25:13.471: INFO: Pod downwardapi-volume-eabd10f7-4b7b-4d8a-883d-0a64c7ca8b95 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:25:13.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2224" for this suite. • [SLOW TEST:6.438 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":3084,"failed":0} SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:25:13.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates 
should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-e96d56d1-89fc-499a-8ea3-851a6f8fdd20 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-e96d56d1-89fc-499a-8ea3-851a6f8fdd20 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:25:21.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8697" for this suite. • [SLOW TEST:8.330 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":3086,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:25:21.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 14 15:25:23.810: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 14 15:25:26.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015523, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015523, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015523, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015523, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:25:28.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015523, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015523, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015523, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015523, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 15:25:31.610: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:25:31.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8339" for this suite. STEP: Destroying namespace "webhook-8339-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.234 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":177,"skipped":3095,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:25:32.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:25:32.246: INFO: Waiting up to 5m0s for pod "busybox-user-65534-efa0b057-5dd1-446b-a936-5204003476ac" in namespace "security-context-test-2877" to be "Succeeded or Failed" Aug 14 15:25:32.252: INFO: Pod 
"busybox-user-65534-efa0b057-5dd1-446b-a936-5204003476ac": Phase="Pending", Reason="", readiness=false. Elapsed: 5.705677ms Aug 14 15:25:34.622: INFO: Pod "busybox-user-65534-efa0b057-5dd1-446b-a936-5204003476ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375658342s Aug 14 15:25:36.641: INFO: Pod "busybox-user-65534-efa0b057-5dd1-446b-a936-5204003476ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394313221s Aug 14 15:25:39.074: INFO: Pod "busybox-user-65534-efa0b057-5dd1-446b-a936-5204003476ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.827505521s Aug 14 15:25:39.074: INFO: Pod "busybox-user-65534-efa0b057-5dd1-446b-a936-5204003476ac" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:25:39.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2877" for this suite. 
• [SLOW TEST:7.717 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:25:39.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Aug 14 15:25:40.166: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:27:38.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2377" for this suite. • [SLOW TEST:119.139 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":179,"skipped":3214,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:27:38.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-80102c45-e402-4bdd-9e4b-464e27bdbfa1 STEP: Creating secret with name s-test-opt-upd-a8c1a7de-e335-44f7-b554-e32894b48142 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-80102c45-e402-4bdd-9e4b-464e27bdbfa1 STEP: Updating secret 
s-test-opt-upd-a8c1a7de-e335-44f7-b554-e32894b48142 STEP: Creating secret with name s-test-opt-create-148056a5-57b0-4801-a3d3-4ce5b16c7fe7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:29:12.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3137" for this suite. • [SLOW TEST:93.576 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3227,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:29:12.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-5e81fee0-5edf-4db8-901d-69d2209cccc5 STEP: Creating a pod 
to test consume configMaps Aug 14 15:29:12.647: INFO: Waiting up to 5m0s for pod "pod-configmaps-74b70acc-4ae1-4814-9116-0ac9fd79b7ec" in namespace "configmap-4740" to be "Succeeded or Failed" Aug 14 15:29:12.651: INFO: Pod "pod-configmaps-74b70acc-4ae1-4814-9116-0ac9fd79b7ec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.954664ms Aug 14 15:29:14.870: INFO: Pod "pod-configmaps-74b70acc-4ae1-4814-9116-0ac9fd79b7ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222512102s Aug 14 15:29:16.878: INFO: Pod "pod-configmaps-74b70acc-4ae1-4814-9116-0ac9fd79b7ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230406885s Aug 14 15:29:18.902: INFO: Pod "pod-configmaps-74b70acc-4ae1-4814-9116-0ac9fd79b7ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.255315651s STEP: Saw pod success Aug 14 15:29:18.903: INFO: Pod "pod-configmaps-74b70acc-4ae1-4814-9116-0ac9fd79b7ec" satisfied condition "Succeeded or Failed" Aug 14 15:29:18.919: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-74b70acc-4ae1-4814-9116-0ac9fd79b7ec container configmap-volume-test: STEP: delete the pod Aug 14 15:29:19.098: INFO: Waiting for pod pod-configmaps-74b70acc-4ae1-4814-9116-0ac9fd79b7ec to disappear Aug 14 15:29:19.103: INFO: Pod pod-configmaps-74b70acc-4ae1-4814-9116-0ac9fd79b7ec no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:29:19.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4740" for this suite. 
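[editor's note] The log above shows the framework's standard pattern: poll the pod's `status.phase` ("Waiting up to 5m0s for pod … to be 'Succeeded or Failed'"), logging the elapsed time at each poll, until a terminal phase or timeout. Below is a minimal self-contained sketch of that wait loop; the function name and the simulated `get_phase` callback are illustrative, not the real framework API (which lives in `test/e2e/framework` in Go).

```python
import time

# Pod phases that end the wait, per the Kubernetes pod lifecycle.
TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_pod_terminal(get_phase, timeout_s=300.0, poll_s=2.0,
                          clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod is Succeeded/Failed or timeout_s elapses.

    Returns the terminal phase, or raises TimeoutError. Mirrors the log's
    'Pod "...": Phase="Pending", ... Elapsed: 2.22s' progression.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in TERMINAL_PHASES:
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(poll_s)

# Simulated pod that is Pending for three polls, then Succeeded —
# the same shape as the configmap pod's transcript above.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_terminal(lambda: next(phases), poll_s=0.0)
```

The real framework additionally treats `Failed` as a test failure for "Succeeded or Failed" waits; this sketch only models the terminal-phase polling itself.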
• [SLOW TEST:6.585 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3239,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:29:19.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:29:19.348: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-39b0654b-db23-484d-b6d3-fa53c38f7933" in namespace "security-context-test-3008" to be "Succeeded or Failed" Aug 14 15:29:19.660: INFO: Pod "busybox-privileged-false-39b0654b-db23-484d-b6d3-fa53c38f7933": Phase="Pending", Reason="", readiness=false. 
Elapsed: 312.270456ms Aug 14 15:29:21.671: INFO: Pod "busybox-privileged-false-39b0654b-db23-484d-b6d3-fa53c38f7933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322734266s Aug 14 15:29:23.678: INFO: Pod "busybox-privileged-false-39b0654b-db23-484d-b6d3-fa53c38f7933": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330147545s Aug 14 15:29:25.684: INFO: Pod "busybox-privileged-false-39b0654b-db23-484d-b6d3-fa53c38f7933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.336504637s Aug 14 15:29:25.685: INFO: Pod "busybox-privileged-false-39b0654b-db23-484d-b6d3-fa53c38f7933" satisfied condition "Succeeded or Failed" Aug 14 15:29:25.692: INFO: Got logs for pod "busybox-privileged-false-39b0654b-db23-484d-b6d3-fa53c38f7933": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:29:25.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3008" for this suite. 
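[editor's note] The Security Context test above passes because the unprivileged busybox container's `ip` command is denied by the kernel, leaving `"ip: RTNETLINK answers: Operation not permitted"` in the container logs. A minimal sketch of that log assertion follows; the helper name is hypothetical — the actual check is in `test/e2e/common/security_context.go`.

```python
def container_ran_unprivileged(container_logs: str) -> bool:
    """With privileged=false, a netlink operation (e.g. 'ip link add') must be
    refused, so the logs should contain the kernel's permission error rather
    than indicating success."""
    return "Operation not permitted" in container_logs

# The exact log line captured in the transcript above.
observed = "ip: RTNETLINK answers: Operation not permitted\n"
```

A privileged container would complete the same `ip` command silently, so an empty or error-free log would indicate the `privileged: false` setting was not enforced.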
• [SLOW TEST:6.591 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3242,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:29:25.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:29:25.834: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 14 15:29:30.844: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 14 15:29:30.844: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 14 15:29:32.851: INFO: Creating deployment "test-rollover-deployment" Aug 14 15:29:32.922: INFO: Make sure 
deployment "test-rollover-deployment" performs scaling operations Aug 14 15:29:35.201: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 14 15:29:35.214: INFO: Ensure that both replica sets have 1 created replica Aug 14 15:29:35.226: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 14 15:29:35.237: INFO: Updating deployment test-rollover-deployment Aug 14 15:29:35.237: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 14 15:29:37.395: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 14 15:29:37.709: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 14 15:29:37.720: INFO: all replica sets need to contain the pod-template-hash label Aug 14 15:29:37.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015777, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:29:39.740: INFO: all replica sets need to contain the pod-template-hash label Aug 14 15:29:39.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015777, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:29:41.739: INFO: all replica sets need to contain the pod-template-hash label Aug 14 15:29:41.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015780, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:29:43.738: INFO: all replica sets need to contain the pod-template-hash label Aug 14 15:29:43.738: 
INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015780, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:29:45.737: INFO: all replica sets need to contain the pod-template-hash label Aug 14 15:29:45.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015780, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 
15:29:47.737: INFO: all replica sets need to contain the pod-template-hash label Aug 14 15:29:47.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015780, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:29:49.736: INFO: all replica sets need to contain the pod-template-hash label Aug 14 15:29:49.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015780, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733015773, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:29:51.735: INFO: Aug 14 15:29:51.736: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 14 15:29:51.748: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9735 /apis/apps/v1/namespaces/deployment-9735/deployments/test-rollover-deployment d72adc99-2fe0-4aa6-bc36-2633b734e2ff 9557607 2 2020-08-14 15:29:32 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-14 15:29:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 
34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-14 15:29:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 
44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003913888 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-14 15:29:33 +0000 UTC,LastTransitionTime:2020-08-14 15:29:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-08-14 15:29:51 +0000 UTC,LastTransitionTime:2020-08-14 15:29:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 14 15:29:51.756: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b deployment-9735 /apis/apps/v1/namespaces/deployment-9735/replicasets/test-rollover-deployment-84f7f6f64b 9cf068bc-6fa3-456a-bf37-048ae80dcb45 9557595 2 2020-08-14 15:29:35 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment d72adc99-2fe0-4aa6-bc36-2633b734e2ff 0x4003913eb7 0x4003913eb8}] [] [{kube-controller-manager Update apps/v1 2020-08-14 15:29:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 
102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 55 50 97 100 99 57 57 45 50 102 101 48 45 52 97 97 54 45 98 99 51 54 45 50 54 51 51 98 55 51 52 101 50 102 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 
58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003913f48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 14 15:29:51.757: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 14 15:29:51.758: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9735 /apis/apps/v1/namespaces/deployment-9735/replicasets/test-rollover-controller 6056a176-bd2c-45dd-bc1d-df90f956614c 9557606 2 2020-08-14 15:29:25 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d72adc99-2fe0-4aa6-bc36-2633b734e2ff 0x4003913ca7 0x4003913ca8}] [] [{e2e.test Update apps/v1 2020-08-14 15:29:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 
104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-14 15:29:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 
107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 55 50 97 100 99 57 57 45 50 102 101 48 45 52 97 97 54 45 98 99 51 54 45 50 54 51 51 98 55 51 52 101 50 102 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4003913d48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 14 
15:29:51.759: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-9735 /apis/apps/v1/namespaces/deployment-9735/replicasets/test-rollover-deployment-5686c4cfd5 c9b81c64-df00-4afd-9d31-09fb4853b121 9557547 2 2020-08-14 15:29:32 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d72adc99-2fe0-4aa6-bc36-2633b734e2ff 0x4003913db7 0x4003913db8}] [] [{kube-controller-manager Update apps/v1 2020-08-14 15:29:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 55 50 97 100 99 57 57 45 50 102 101 48 45 52 97 97 54 45 98 99 51 54 45 50 54 51 51 98 55 51 52 101 50 102 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 
108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 
125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003913e48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 14 15:29:51.768: INFO: Pod "test-rollover-deployment-84f7f6f64b-9rtrl" is available: &Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-9rtrl test-rollover-deployment-84f7f6f64b- deployment-9735 /api/v1/namespaces/deployment-9735/pods/test-rollover-deployment-84f7f6f64b-9rtrl 56c57a85-0a54-484b-971c-a926fd065c67 9557561 0 2020-08-14 15:29:36 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 
9cf068bc-6fa3-456a-bf37-048ae80dcb45 0x400582e4f7 0x400582e4f8}] [] [{kube-controller-manager Update v1 2020-08-14 15:29:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 99 102 48 54 56 98 99 45 54 102 97 51 45 52 53 54 97 45 98 102 51 55 45 48 52 56 97 101 56 48 100 99 98 52 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 
98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-14 15:29:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 
115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 50 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b4d6z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b4d6z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b4d6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,Nod
eSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:29:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:29:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:29:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-14 15:29:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.125,StartTime:2020-08-14 15:29:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-14 15:29:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://cb6e484b15fff5cb325f88140c548a8e338211a23e3329141b3a9a56d2f694aa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:29:51.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9735" for this suite. 
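For reference, the Deployment driven by the rollover test above can be approximated by a manifest like the following. The names, namespace, labels, image, and `minReadySeconds` are taken from the ReplicaSet dumps in the log; the replica count and any field not visible in the dump are assumptions for illustration only:

```yaml
# Sketch of the mid-rollover state of the Deployment exercised above.
# Names, labels, image, and minReadySeconds come from the log's
# ReplicaSet dumps; replicas is assumed (the log's annotation shows
# deployment.kubernetes.io/desired-replicas: 1).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  namespace: deployment-9735
spec:
  replicas: 1            # assumed from desired-replicas annotation
  minReadySeconds: 10    # matches MinReadySeconds:10 in the intermediate ReplicaSet
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis-slave
        # Intermediate image from the log; the test later rolls over
        # to us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12.
        image: gcr.io/google_samples/gb-redisslave:nonexistent
        imagePullPolicy: IfNotPresent
```

The test's point is visible in the dumps above: the intermediate ReplicaSet (`test-rollover-deployment-5686c4cfd5`) ends with `Replicas:*0` while the final one (`test-rollover-deployment-84f7f6f64b`) owns the running pod, showing the controller rolled over before the broken image ever became available.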
• [SLOW TEST:26.086 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":183,"skipped":3255,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:29:51.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3516 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Aug 14 15:29:52.205: INFO: Found 0 stateful pods, waiting for 3 Aug 14 15:30:02.249: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 14 15:30:02.249: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 
14 15:30:02.249: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 14 15:30:12.214: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 14 15:30:12.214: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 14 15:30:12.214: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 14 15:30:12.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3516 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 14 15:30:16.773: INFO: stderr: "I0814 15:30:16.589863 1683 log.go:172] (0x400003a580) (0x4000550aa0) Create stream\nI0814 15:30:16.594375 1683 log.go:172] (0x400003a580) (0x4000550aa0) Stream added, broadcasting: 1\nI0814 15:30:16.604414 1683 log.go:172] (0x400003a580) Reply frame received for 1\nI0814 15:30:16.606030 1683 log.go:172] (0x400003a580) (0x4000744000) Create stream\nI0814 15:30:16.606171 1683 log.go:172] (0x400003a580) (0x4000744000) Stream added, broadcasting: 3\nI0814 15:30:16.608092 1683 log.go:172] (0x400003a580) Reply frame received for 3\nI0814 15:30:16.608488 1683 log.go:172] (0x400003a580) (0x40008e40a0) Create stream\nI0814 15:30:16.608569 1683 log.go:172] (0x400003a580) (0x40008e40a0) Stream added, broadcasting: 5\nI0814 15:30:16.610170 1683 log.go:172] (0x400003a580) Reply frame received for 5\nI0814 15:30:16.705870 1683 log.go:172] (0x400003a580) Data frame received for 5\nI0814 15:30:16.706090 1683 log.go:172] (0x40008e40a0) (5) Data frame handling\nI0814 15:30:16.706558 1683 log.go:172] (0x40008e40a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0814 15:30:16.748151 1683 log.go:172] (0x400003a580) Data frame received for 3\nI0814 15:30:16.748323 1683 log.go:172] (0x400003a580) Data frame received for 5\nI0814 15:30:16.748580 1683 
log.go:172] (0x40008e40a0) (5) Data frame handling\nI0814 15:30:16.748967 1683 log.go:172] (0x4000744000) (3) Data frame handling\nI0814 15:30:16.749124 1683 log.go:172] (0x4000744000) (3) Data frame sent\nI0814 15:30:16.749288 1683 log.go:172] (0x400003a580) Data frame received for 3\nI0814 15:30:16.749479 1683 log.go:172] (0x4000744000) (3) Data frame handling\nI0814 15:30:16.750897 1683 log.go:172] (0x400003a580) Data frame received for 1\nI0814 15:30:16.751110 1683 log.go:172] (0x4000550aa0) (1) Data frame handling\nI0814 15:30:16.751319 1683 log.go:172] (0x4000550aa0) (1) Data frame sent\nI0814 15:30:16.753221 1683 log.go:172] (0x400003a580) (0x4000550aa0) Stream removed, broadcasting: 1\nI0814 15:30:16.755975 1683 log.go:172] (0x400003a580) Go away received\nI0814 15:30:16.760192 1683 log.go:172] (0x400003a580) (0x4000550aa0) Stream removed, broadcasting: 1\nI0814 15:30:16.760553 1683 log.go:172] (0x400003a580) (0x4000744000) Stream removed, broadcasting: 3\nI0814 15:30:16.760860 1683 log.go:172] (0x400003a580) (0x40008e40a0) Stream removed, broadcasting: 5\n" Aug 14 15:30:16.774: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 14 15:30:16.774: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 14 15:30:26.819: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 14 15:30:36.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3516 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 14 15:30:38.349: INFO: stderr: "I0814 15:30:38.225303 1717 log.go:172] (0x4000a400b0) (0x4000918140) Create stream\nI0814 15:30:38.227689 1717 
log.go:172] (0x4000a400b0) (0x4000918140) Stream added, broadcasting: 1\nI0814 15:30:38.240093 1717 log.go:172] (0x4000a400b0) Reply frame received for 1\nI0814 15:30:38.240958 1717 log.go:172] (0x4000a400b0) (0x4000b84000) Create stream\nI0814 15:30:38.241034 1717 log.go:172] (0x4000a400b0) (0x4000b84000) Stream added, broadcasting: 3\nI0814 15:30:38.243006 1717 log.go:172] (0x4000a400b0) Reply frame received for 3\nI0814 15:30:38.243490 1717 log.go:172] (0x4000a400b0) (0x4000918320) Create stream\nI0814 15:30:38.243604 1717 log.go:172] (0x4000a400b0) (0x4000918320) Stream added, broadcasting: 5\nI0814 15:30:38.245184 1717 log.go:172] (0x4000a400b0) Reply frame received for 5\nI0814 15:30:38.324273 1717 log.go:172] (0x4000a400b0) Data frame received for 3\nI0814 15:30:38.324575 1717 log.go:172] (0x4000a400b0) Data frame received for 5\nI0814 15:30:38.324678 1717 log.go:172] (0x4000918320) (5) Data frame handling\nI0814 15:30:38.325063 1717 log.go:172] (0x4000b84000) (3) Data frame handling\nI0814 15:30:38.325282 1717 log.go:172] (0x4000a400b0) Data frame received for 1\nI0814 15:30:38.325409 1717 log.go:172] (0x4000918320) (5) Data frame sent\nI0814 15:30:38.325696 1717 log.go:172] (0x4000b84000) (3) Data frame sent\nI0814 15:30:38.326062 1717 log.go:172] (0x4000918140) (1) Data frame handling\nI0814 15:30:38.326258 1717 log.go:172] (0x4000918140) (1) Data frame sent\nI0814 15:30:38.326378 1717 log.go:172] (0x4000a400b0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0814 15:30:38.326505 1717 log.go:172] (0x4000b84000) (3) Data frame handling\nI0814 15:30:38.326673 1717 log.go:172] (0x4000a400b0) Data frame received for 5\nI0814 15:30:38.326772 1717 log.go:172] (0x4000918320) (5) Data frame handling\nI0814 15:30:38.328687 1717 log.go:172] (0x4000a400b0) (0x4000918140) Stream removed, broadcasting: 1\nI0814 15:30:38.331084 1717 log.go:172] (0x4000a400b0) Go away received\nI0814 15:30:38.333870 1717 log.go:172] (0x4000a400b0) 
(0x4000918140) Stream removed, broadcasting: 1\nI0814 15:30:38.334308 1717 log.go:172] (0x4000a400b0) (0x4000b84000) Stream removed, broadcasting: 3\nI0814 15:30:38.335008 1717 log.go:172] (0x4000a400b0) (0x4000918320) Stream removed, broadcasting: 5\n" Aug 14 15:30:38.350: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 14 15:30:38.350: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 14 15:30:58.387: INFO: Waiting for StatefulSet statefulset-3516/ss2 to complete update Aug 14 15:30:58.388: INFO: Waiting for Pod statefulset-3516/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Aug 14 15:31:08.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3516 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 14 15:31:09.924: INFO: stderr: "I0814 15:31:09.771728 1741 log.go:172] (0x4000a68d10) (0x40005ee1e0) Create stream\nI0814 15:31:09.777529 1741 log.go:172] (0x4000a68d10) (0x40005ee1e0) Stream added, broadcasting: 1\nI0814 15:31:09.786735 1741 log.go:172] (0x4000a68d10) Reply frame received for 1\nI0814 15:31:09.787279 1741 log.go:172] (0x4000a68d10) (0x400058caa0) Create stream\nI0814 15:31:09.787338 1741 log.go:172] (0x4000a68d10) (0x400058caa0) Stream added, broadcasting: 3\nI0814 15:31:09.789323 1741 log.go:172] (0x4000a68d10) Reply frame received for 3\nI0814 15:31:09.789853 1741 log.go:172] (0x4000a68d10) (0x40005ee280) Create stream\nI0814 15:31:09.789966 1741 log.go:172] (0x4000a68d10) (0x40005ee280) Stream added, broadcasting: 5\nI0814 15:31:09.791534 1741 log.go:172] (0x4000a68d10) Reply frame received for 5\nI0814 15:31:09.855606 1741 log.go:172] (0x4000a68d10) Data frame received for 5\nI0814 15:31:09.855942 1741 log.go:172] (0x40005ee280) (5) Data 
frame handling\nI0814 15:31:09.856871 1741 log.go:172] (0x40005ee280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0814 15:31:09.901780 1741 log.go:172] (0x4000a68d10) Data frame received for 3\nI0814 15:31:09.901992 1741 log.go:172] (0x400058caa0) (3) Data frame handling\nI0814 15:31:09.902186 1741 log.go:172] (0x4000a68d10) Data frame received for 5\nI0814 15:31:09.902379 1741 log.go:172] (0x40005ee280) (5) Data frame handling\nI0814 15:31:09.902601 1741 log.go:172] (0x400058caa0) (3) Data frame sent\nI0814 15:31:09.902807 1741 log.go:172] (0x4000a68d10) Data frame received for 3\nI0814 15:31:09.902936 1741 log.go:172] (0x400058caa0) (3) Data frame handling\nI0814 15:31:09.907608 1741 log.go:172] (0x4000a68d10) Data frame received for 1\nI0814 15:31:09.907742 1741 log.go:172] (0x40005ee1e0) (1) Data frame handling\nI0814 15:31:09.907863 1741 log.go:172] (0x40005ee1e0) (1) Data frame sent\nI0814 15:31:09.908388 1741 log.go:172] (0x4000a68d10) (0x40005ee1e0) Stream removed, broadcasting: 1\nI0814 15:31:09.909405 1741 log.go:172] (0x4000a68d10) Go away received\nI0814 15:31:09.913527 1741 log.go:172] (0x4000a68d10) (0x40005ee1e0) Stream removed, broadcasting: 1\nI0814 15:31:09.913745 1741 log.go:172] (0x4000a68d10) (0x400058caa0) Stream removed, broadcasting: 3\nI0814 15:31:09.913903 1741 log.go:172] (0x4000a68d10) (0x40005ee280) Stream removed, broadcasting: 5\n" Aug 14 15:31:09.925: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 14 15:31:09.925: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 14 15:31:20.010: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 14 15:31:30.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3516 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Aug 14 15:31:31.529: INFO: stderr: "I0814 15:31:31.410405 1763 log.go:172] (0x40008cb810) (0x4000920c80) Create stream\nI0814 15:31:31.413786 1763 log.go:172] (0x40008cb810) (0x4000920c80) Stream added, broadcasting: 1\nI0814 15:31:31.430125 1763 log.go:172] (0x40008cb810) Reply frame received for 1\nI0814 15:31:31.430681 1763 log.go:172] (0x40008cb810) (0x40007a1720) Create stream\nI0814 15:31:31.430732 1763 log.go:172] (0x40008cb810) (0x40007a1720) Stream added, broadcasting: 3\nI0814 15:31:31.431995 1763 log.go:172] (0x40008cb810) Reply frame received for 3\nI0814 15:31:31.432212 1763 log.go:172] (0x40008cb810) (0x4000704b40) Create stream\nI0814 15:31:31.432263 1763 log.go:172] (0x40008cb810) (0x4000704b40) Stream added, broadcasting: 5\nI0814 15:31:31.433334 1763 log.go:172] (0x40008cb810) Reply frame received for 5\nI0814 15:31:31.515053 1763 log.go:172] (0x40008cb810) Data frame received for 5\nI0814 15:31:31.515249 1763 log.go:172] (0x4000704b40) (5) Data frame handling\nI0814 15:31:31.515668 1763 log.go:172] (0x40008cb810) Data frame received for 3\nI0814 15:31:31.515746 1763 log.go:172] (0x40007a1720) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0814 15:31:31.516180 1763 log.go:172] (0x4000704b40) (5) Data frame sent\nI0814 15:31:31.516355 1763 log.go:172] (0x40008cb810) Data frame received for 5\nI0814 15:31:31.516416 1763 log.go:172] (0x4000704b40) (5) Data frame handling\nI0814 15:31:31.516874 1763 log.go:172] (0x40007a1720) (3) Data frame sent\nI0814 15:31:31.516973 1763 log.go:172] (0x40008cb810) Data frame received for 3\nI0814 15:31:31.517045 1763 log.go:172] (0x40008cb810) Data frame received for 1\nI0814 15:31:31.517132 1763 log.go:172] (0x4000920c80) (1) Data frame handling\nI0814 15:31:31.517208 1763 log.go:172] (0x4000920c80) (1) Data frame sent\nI0814 15:31:31.517254 1763 log.go:172] (0x40007a1720) (3) Data frame handling\nI0814 15:31:31.518268 1763 log.go:172] 
(0x40008cb810) (0x4000920c80) Stream removed, broadcasting: 1\nI0814 15:31:31.519789 1763 log.go:172] (0x40008cb810) Go away received\nI0814 15:31:31.521568 1763 log.go:172] (0x40008cb810) (0x4000920c80) Stream removed, broadcasting: 1\nI0814 15:31:31.521877 1763 log.go:172] (0x40008cb810) (0x40007a1720) Stream removed, broadcasting: 3\nI0814 15:31:31.522184 1763 log.go:172] (0x40008cb810) (0x4000704b40) Stream removed, broadcasting: 5\n" Aug 14 15:31:31.529: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 14 15:31:31.529: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 14 15:31:41.591: INFO: Waiting for StatefulSet statefulset-3516/ss2 to complete update Aug 14 15:31:41.591: INFO: Waiting for Pod statefulset-3516/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 14 15:31:41.591: INFO: Waiting for Pod statefulset-3516/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 14 15:31:51.606: INFO: Waiting for StatefulSet statefulset-3516/ss2 to complete update Aug 14 15:31:51.606: INFO: Waiting for Pod statefulset-3516/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 14 15:31:51.607: INFO: Waiting for Pod statefulset-3516/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 14 15:32:01.757: INFO: Waiting for StatefulSet statefulset-3516/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 14 15:32:11.610: INFO: Deleting all statefulset in ns statefulset-3516 Aug 14 15:32:11.615: INFO: Scaling statefulset ss2 to 0 Aug 14 15:32:31.647: INFO: Waiting for statefulset status.replicas updated to 0 Aug 14 15:32:31.651: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:32:31.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3516" for this suite. • [SLOW TEST:159.888 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":184,"skipped":3260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:32:31.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Aug 14 15:32:31.779: INFO: Asynchronously running 
'/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix270726749/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:32:32.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8591" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":185,"skipped":3311,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:32:32.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:32:40.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3969" for this suite. 
• [SLOW TEST:7.361 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":186,"skipped":3319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:32:40.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-2147 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2147 STEP: Deleting pre-stop pod Aug 14 15:32:57.330: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:32:57.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2147" for this suite. • [SLOW TEST:17.219 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":187,"skipped":3357,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:32:57.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait 
for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 14 15:33:02.041: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:33:03.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4949" for this suite. • [SLOW TEST:6.111 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3359,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:33:03.493: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 14 15:33:04.674: INFO: Waiting up to 5m0s for pod "pod-5fa396eb-1b24-4539-b690-b59848d8f4cd" in namespace "emptydir-5523" to be "Succeeded or Failed" Aug 14 15:33:04.724: INFO: Pod "pod-5fa396eb-1b24-4539-b690-b59848d8f4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 50.348831ms Aug 14 15:33:06.732: INFO: Pod "pod-5fa396eb-1b24-4539-b690-b59848d8f4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058418265s Aug 14 15:33:09.093: INFO: Pod "pod-5fa396eb-1b24-4539-b690-b59848d8f4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.419232699s Aug 14 15:33:11.101: INFO: Pod "pod-5fa396eb-1b24-4539-b690-b59848d8f4cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.426811939s STEP: Saw pod success Aug 14 15:33:11.101: INFO: Pod "pod-5fa396eb-1b24-4539-b690-b59848d8f4cd" satisfied condition "Succeeded or Failed" Aug 14 15:33:11.106: INFO: Trying to get logs from node kali-worker pod pod-5fa396eb-1b24-4539-b690-b59848d8f4cd container test-container: STEP: delete the pod Aug 14 15:33:11.163: INFO: Waiting for pod pod-5fa396eb-1b24-4539-b690-b59848d8f4cd to disappear Aug 14 15:33:11.253: INFO: Pod pod-5fa396eb-1b24-4539-b690-b59848d8f4cd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:33:11.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5523" for this suite. 
• [SLOW TEST:7.778 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3362,"failed":0} [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:33:11.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Aug 14 15:33:15.462: INFO: Pod pod-hostip-b28ab52b-4e4f-4db6-a0f1-c6de922b9297 has hostIP: 172.18.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:33:15.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7268" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3362,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:33:15.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-3ffbd7b5-7fb9-4751-a639-82222068ded1 STEP: Creating a pod to test consume configMaps Aug 14 15:33:15.591: INFO: Waiting up to 5m0s for pod "pod-configmaps-0661084e-6179-443f-b2e9-026c7dd503bd" in namespace "configmap-3607" to be "Succeeded or Failed" Aug 14 15:33:15.664: INFO: Pod "pod-configmaps-0661084e-6179-443f-b2e9-026c7dd503bd": Phase="Pending", Reason="", readiness=false. Elapsed: 72.402991ms Aug 14 15:33:17.672: INFO: Pod "pod-configmaps-0661084e-6179-443f-b2e9-026c7dd503bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080446341s Aug 14 15:33:19.679: INFO: Pod "pod-configmaps-0661084e-6179-443f-b2e9-026c7dd503bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087737056s Aug 14 15:33:21.970: INFO: Pod "pod-configmaps-0661084e-6179-443f-b2e9-026c7dd503bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.3790069s STEP: Saw pod success Aug 14 15:33:21.971: INFO: Pod "pod-configmaps-0661084e-6179-443f-b2e9-026c7dd503bd" satisfied condition "Succeeded or Failed" Aug 14 15:33:21.976: INFO: Trying to get logs from node kali-worker pod pod-configmaps-0661084e-6179-443f-b2e9-026c7dd503bd container configmap-volume-test: STEP: delete the pod Aug 14 15:33:22.145: INFO: Waiting for pod pod-configmaps-0661084e-6179-443f-b2e9-026c7dd503bd to disappear Aug 14 15:33:22.174: INFO: Pod pod-configmaps-0661084e-6179-443f-b2e9-026c7dd503bd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:33:22.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3607" for this suite. • [SLOW TEST:6.711 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3378,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:33:22.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 14 15:33:28.212: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:33:28.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6151" for this suite. • [SLOW TEST:6.124 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3390,"failed":0} SSSSSSSSSS ------------------------------ 
[sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:33:28.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4721.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4721.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 14 15:33:34.667: INFO: DNS probes using dns-test-b6755ec9-57a9-40f0-8956-01e7a2bf812a succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4721.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4721.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: 
retrieving the pod STEP: looking for the results for each expected name from probers Aug 14 15:33:42.821: INFO: File wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local from pod dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 14 15:33:42.827: INFO: File jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local from pod dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 14 15:33:42.828: INFO: Lookups using dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c failed for: [wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local] Aug 14 15:33:47.835: INFO: File wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local from pod dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 14 15:33:47.841: INFO: File jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local from pod dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 14 15:33:47.841: INFO: Lookups using dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c failed for: [wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local] Aug 14 15:33:52.853: INFO: File wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local from pod dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 14 15:33:52.858: INFO: File jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local from pod dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 14 15:33:52.859: INFO: Lookups using dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c failed for: [wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local] Aug 14 15:33:57.835: INFO: File wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local from pod dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 14 15:33:57.840: INFO: File jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local from pod dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 14 15:33:57.840: INFO: Lookups using dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c failed for: [wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local] Aug 14 15:34:02.833: INFO: File wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local from pod dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 14 15:34:02.836: INFO: File jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local from pod dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 14 15:34:02.836: INFO: Lookups using dns-4721/dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c failed for: [wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local] Aug 14 15:34:07.838: INFO: DNS probes using dns-test-24a288cc-6d56-42ff-95a0-95496ffb617c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4721.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4721.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4721.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4721.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 14 15:34:16.645: INFO: DNS probes using dns-test-95165e36-8857-4489-9027-0e60cfe7e798 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:34:16.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4721" for this suite. 
• [SLOW TEST:48.733 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":193,"skipped":3400,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:34:17.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:34:17.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3077" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":194,"skipped":3401,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:34:17.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Aug 14 15:34:17.871: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Aug 14 15:34:17.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9407' Aug 14 15:34:20.052: INFO: stderr: "" Aug 14 15:34:20.052: INFO: stdout: "service/agnhost-slave created\n" Aug 14 15:34:20.053: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Aug 14 15:34:20.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f 
- --namespace=kubectl-9407' Aug 14 15:34:21.641: INFO: stderr: "" Aug 14 15:34:21.641: INFO: stdout: "service/agnhost-master created\n" Aug 14 15:34:21.642: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Aug 14 15:34:21.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9407' Aug 14 15:34:23.356: INFO: stderr: "" Aug 14 15:34:23.356: INFO: stdout: "service/frontend created\n" Aug 14 15:34:23.357: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Aug 14 15:34:23.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9407' Aug 14 15:34:24.922: INFO: stderr: "" Aug 14 15:34:24.922: INFO: stdout: "deployment.apps/frontend created\n" Aug 14 15:34:24.924: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 14 15:34:24.924: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9407' Aug 14 15:34:26.586: INFO: stderr: "" Aug 14 15:34:26.586: INFO: stdout: "deployment.apps/agnhost-master created\n" Aug 14 15:34:26.587: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 14 15:34:26.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9407' Aug 14 15:34:28.809: INFO: stderr: "" Aug 14 15:34:28.809: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Aug 14 15:34:28.809: INFO: Waiting for all frontend pods to be Running. Aug 14 15:34:33.861: INFO: Waiting for frontend to serve content. Aug 14 15:34:35.311: INFO: Trying to add a new entry to the guestbook. Aug 14 15:34:35.371: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 14 15:34:35.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9407' Aug 14 15:34:36.822: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 14 15:34:36.822: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Aug 14 15:34:36.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9407' Aug 14 15:34:38.105: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 14 15:34:38.106: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Aug 14 15:34:38.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9407' Aug 14 15:34:39.321: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 14 15:34:39.322: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 14 15:34:39.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9407' Aug 14 15:34:40.554: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 14 15:34:40.554: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 14 15:34:40.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9407' Aug 14 15:34:41.870: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 14 15:34:41.870: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Aug 14 15:34:41.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9407' Aug 14 15:34:43.677: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 14 15:34:43.677: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:34:43.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9407" for this suite. 
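The guestbook validation above first waits for all frontend pods to reach the Running phase before probing the service. A minimal sketch of that kind of poll loop, under assumptions: `get_phase` is a hypothetical callback standing in for a pod-status lookup, and the timeout/interval values are illustrative, not the framework's actual Go implementation.

```python
import time

def wait_for_phase(get_phase, want=("Running",), timeout=300.0, poll=2.0, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or `timeout` elapses.

    get_phase: hypothetical zero-arg callable returning a pod phase string.
    sleep: injectable for testing, defaults to time.sleep.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        phase = get_phase()
        if phase in want:
            return phase
        sleep(poll)
    raise TimeoutError(f"pod never reached one of {want} within {timeout}s")
```

In the log the frontend pods went Pending to Running in about five seconds, so a loop like this returns after a few polls.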
• [SLOW TEST:26.129 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":195,"skipped":3411,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:34:43.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 15:34:45.334: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bac9dbc8-e01b-4ae2-8d72-e6367a0ea442" in namespace "downward-api-1944" to be "Succeeded or Failed" Aug 14 15:34:45.518: INFO: Pod "downwardapi-volume-bac9dbc8-e01b-4ae2-8d72-e6367a0ea442": Phase="Pending", Reason="", readiness=false. 
Elapsed: 182.97267ms Aug 14 15:34:47.926: INFO: Pod "downwardapi-volume-bac9dbc8-e01b-4ae2-8d72-e6367a0ea442": Phase="Pending", Reason="", readiness=false. Elapsed: 2.591664989s Aug 14 15:34:50.357: INFO: Pod "downwardapi-volume-bac9dbc8-e01b-4ae2-8d72-e6367a0ea442": Phase="Pending", Reason="", readiness=false. Elapsed: 5.022079244s Aug 14 15:34:52.361: INFO: Pod "downwardapi-volume-bac9dbc8-e01b-4ae2-8d72-e6367a0ea442": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.026759767s STEP: Saw pod success Aug 14 15:34:52.361: INFO: Pod "downwardapi-volume-bac9dbc8-e01b-4ae2-8d72-e6367a0ea442" satisfied condition "Succeeded or Failed" Aug 14 15:34:52.367: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-bac9dbc8-e01b-4ae2-8d72-e6367a0ea442 container client-container: STEP: delete the pod Aug 14 15:34:52.443: INFO: Waiting for pod downwardapi-volume-bac9dbc8-e01b-4ae2-8d72-e6367a0ea442 to disappear Aug 14 15:34:52.487: INFO: Pod downwardapi-volume-bac9dbc8-e01b-4ae2-8d72-e6367a0ea442 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:34:52.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1944" for this suite. 
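The Downward API test above mounts a volume exposing the container's `requests.memory`; with the default divisor of 1, the container reads the quantity converted to a plain byte count. A sketch of that conversion for the common suffixes of the Kubernetes quantity format (simplified: no fractional values, exponent notation, or milli-units):

```python
# Binary (Ki/Mi/Gi) and decimal (k/M/G) suffixes from the Kubernetes quantity format.
SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,
            "k": 10**3, "M": 10**6, "G": 10**9}

def quantity_to_bytes(q: str) -> int:
    """Convert a resource quantity like '100Mi' to bytes (divisor 1)."""
    for suffix in sorted(SUFFIXES, key=len, reverse=True):  # try 2-char suffixes first
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * SUFFIXES[suffix]
    return int(q)  # bare integer is already bytes
```

So a request of `100Mi`, as in the guestbook manifest earlier in this run, appears to the pod as `104857600`.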
• [SLOW TEST:8.756 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3413,"failed":0} [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:34:52.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-4690/configmap-test-a28aa44b-a3f7-4dd8-ac74-5f0e41135a45 STEP: Creating a pod to test consume configMaps Aug 14 15:34:52.584: INFO: Waiting up to 5m0s for pod "pod-configmaps-45d63ef9-4faa-40e5-be1d-7d2ccfcc3dec" in namespace "configmap-4690" to be "Succeeded or Failed" Aug 14 15:34:52.643: INFO: Pod "pod-configmaps-45d63ef9-4faa-40e5-be1d-7d2ccfcc3dec": Phase="Pending", Reason="", readiness=false. Elapsed: 59.262451ms Aug 14 15:34:54.647: INFO: Pod "pod-configmaps-45d63ef9-4faa-40e5-be1d-7d2ccfcc3dec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063363806s Aug 14 15:34:56.652: INFO: Pod "pod-configmaps-45d63ef9-4faa-40e5-be1d-7d2ccfcc3dec": Phase="Running", Reason="", readiness=true. Elapsed: 4.067946063s Aug 14 15:34:58.658: INFO: Pod "pod-configmaps-45d63ef9-4faa-40e5-be1d-7d2ccfcc3dec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074176964s STEP: Saw pod success Aug 14 15:34:58.658: INFO: Pod "pod-configmaps-45d63ef9-4faa-40e5-be1d-7d2ccfcc3dec" satisfied condition "Succeeded or Failed" Aug 14 15:34:58.661: INFO: Trying to get logs from node kali-worker pod pod-configmaps-45d63ef9-4faa-40e5-be1d-7d2ccfcc3dec container env-test: STEP: delete the pod Aug 14 15:34:58.684: INFO: Waiting for pod pod-configmaps-45d63ef9-4faa-40e5-be1d-7d2ccfcc3dec to disappear Aug 14 15:34:58.688: INFO: Pod pod-configmaps-45d63ef9-4faa-40e5-be1d-7d2ccfcc3dec no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:34:58.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4690" for this suite. 
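The ConfigMap test above injects ConfigMap data into a container's environment. When a whole ConfigMap is pulled in via `envFrom`, only keys that are valid environment-variable identifiers can become variables; a sketch of that filtering (hypothetical helper, simplified from the kubelet's behavior):

```python
import re

# Keys must look like C identifiers to be usable as environment variable names.
C_IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def configmap_to_env(data):
    """Split ConfigMap data into injectable env vars and skipped keys."""
    env, skipped = {}, []
    for key, value in data.items():
        if C_IDENTIFIER.match(key):
            env[key] = value
        else:
            skipped.append(key)
    return env, skipped
```

Keys with dashes or dots (legal in a ConfigMap, illegal as env names) end up in the skipped list rather than the environment.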
• [SLOW TEST:6.200 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3413,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:34:58.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-871193ef-64eb-4aea-9a30-9124c3490c9c STEP: Creating a pod to test consume configMaps Aug 14 15:34:58.959: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f5c42c0d-561c-4392-91b3-0f92c7b41fd3" in namespace "projected-8975" to be "Succeeded or Failed" Aug 14 15:34:59.040: INFO: Pod "pod-projected-configmaps-f5c42c0d-561c-4392-91b3-0f92c7b41fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 80.554502ms Aug 14 15:35:01.046: INFO: Pod "pod-projected-configmaps-f5c42c0d-561c-4392-91b3-0f92c7b41fd3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.087212447s Aug 14 15:35:03.053: INFO: Pod "pod-projected-configmaps-f5c42c0d-561c-4392-91b3-0f92c7b41fd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093827782s STEP: Saw pod success Aug 14 15:35:03.053: INFO: Pod "pod-projected-configmaps-f5c42c0d-561c-4392-91b3-0f92c7b41fd3" satisfied condition "Succeeded or Failed" Aug 14 15:35:03.058: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-f5c42c0d-561c-4392-91b3-0f92c7b41fd3 container projected-configmap-volume-test: STEP: delete the pod Aug 14 15:35:03.368: INFO: Waiting for pod pod-projected-configmaps-f5c42c0d-561c-4392-91b3-0f92c7b41fd3 to disappear Aug 14 15:35:03.371: INFO: Pod pod-projected-configmaps-f5c42c0d-561c-4392-91b3-0f92c7b41fd3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:35:03.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8975" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3430,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:35:03.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 14 15:35:07.756: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:35:07.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-14" for this suite. 
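The Container Runtime test above checks termination-message resolution: even with `TerminationMessagePolicy: FallbackToLogsOnError` set, a non-empty termination message file wins, and the log fallback only applies when the container fails with nothing written to the file. A simplified sketch of that decision (hypothetical function; the real kubelet also caps the fallback at roughly the last 2048 bytes or 80 lines of output):

```python
def termination_message(file_contents, exit_code, logs, policy="File", limit=2048):
    """Resolve a container's termination message (simplified kubelet logic)."""
    if file_contents:
        return file_contents          # message file always takes precedence
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs[-limit:]          # tail of the container log on failure
    return ""
```

This matches the run above: the container succeeded and had written `OK` to the file, so the message reported is `OK`, not the logs.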
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3436,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:35:07.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8180.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8180.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8180.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8180.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8180.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8180.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 14 15:35:16.056: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:16.060: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:16.063: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:16.066: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:16.077: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:16.081: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from 
pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:16.085: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:16.089: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:16.097: INFO: Lookups using dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local] Aug 14 15:35:21.104: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:21.109: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:21.113: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local from 
pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:21.117: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:21.131: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:21.136: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:21.142: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:21.145: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:21.150: INFO: Lookups using dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local] Aug 14 15:35:26.103: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:26.107: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:26.110: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:26.113: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:26.121: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:26.124: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:26.128: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod 
dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:26.131: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:26.137: INFO: Lookups using dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local] Aug 14 15:35:31.105: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:31.110: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:31.115: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:31.119: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod 
dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:31.132: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:31.136: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:31.141: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:31.145: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:31.155: INFO: Lookups using dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local] Aug 14 15:35:36.102: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:36.105: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:36.108: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:36.111: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:36.119: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:36.121: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:36.124: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:36.127: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:36.135: INFO: Lookups using dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local] Aug 14 15:35:41.103: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:41.107: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:41.111: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:41.115: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:41.125: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:41.128: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:41.130: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:41.132: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local from pod dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739: the server could not find the requested resource (get pods dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739) Aug 14 15:35:41.137: INFO: Lookups using dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8180.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8180.svc.cluster.local jessie_udp@dns-test-service-2.dns-8180.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8180.svc.cluster.local] Aug 14 15:35:46.135: INFO: DNS probes using dns-8180/dns-test-de9bdf19-2430-4a2a-b8a6-58ceca36d739 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 
15:35:46.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8180" for this suite. • [SLOW TEST:39.123 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":200,"skipped":3445,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:35:46.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-c05eab40-c98c-4d98-81a6-71ad0d7b84f7 STEP: Creating a pod to test consume secrets Aug 14 15:35:47.054: INFO: Waiting up to 5m0s for pod "pod-secrets-6c4c2ba8-267d-4a01-bbd1-ac006984072d" in namespace "secrets-4128" to be "Succeeded or Failed" Aug 14 15:35:47.075: INFO: Pod "pod-secrets-6c4c2ba8-267d-4a01-bbd1-ac006984072d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.378277ms Aug 14 15:35:49.152: INFO: Pod "pod-secrets-6c4c2ba8-267d-4a01-bbd1-ac006984072d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.098550001s Aug 14 15:35:51.159: INFO: Pod "pod-secrets-6c4c2ba8-267d-4a01-bbd1-ac006984072d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104799447s STEP: Saw pod success Aug 14 15:35:51.159: INFO: Pod "pod-secrets-6c4c2ba8-267d-4a01-bbd1-ac006984072d" satisfied condition "Succeeded or Failed" Aug 14 15:35:51.163: INFO: Trying to get logs from node kali-worker pod pod-secrets-6c4c2ba8-267d-4a01-bbd1-ac006984072d container secret-env-test: STEP: delete the pod Aug 14 15:35:51.513: INFO: Waiting for pod pod-secrets-6c4c2ba8-267d-4a01-bbd1-ac006984072d to disappear Aug 14 15:35:51.521: INFO: Pod pod-secrets-6c4c2ba8-267d-4a01-bbd1-ac006984072d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:35:51.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4128" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3447,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:35:51.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] 
EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:35:55.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6320" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":202,"skipped":3463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:35:55.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 15:35:56.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5c0eb99-740e-4271-8482-88b26ad564ef" in namespace "downward-api-4934" to be "Succeeded or Failed" Aug 14 15:35:56.525: INFO: Pod "downwardapi-volume-d5c0eb99-740e-4271-8482-88b26ad564ef": Phase="Pending", Reason="", readiness=false. Elapsed: 238.225217ms Aug 14 15:35:58.531: INFO: Pod "downwardapi-volume-d5c0eb99-740e-4271-8482-88b26ad564ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.24358489s Aug 14 15:36:00.537: INFO: Pod "downwardapi-volume-d5c0eb99-740e-4271-8482-88b26ad564ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2502262s STEP: Saw pod success Aug 14 15:36:00.538: INFO: Pod "downwardapi-volume-d5c0eb99-740e-4271-8482-88b26ad564ef" satisfied condition "Succeeded or Failed" Aug 14 15:36:00.542: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d5c0eb99-740e-4271-8482-88b26ad564ef container client-container: STEP: delete the pod Aug 14 15:36:00.610: INFO: Waiting for pod downwardapi-volume-d5c0eb99-740e-4271-8482-88b26ad564ef to disappear Aug 14 15:36:00.618: INFO: Pod downwardapi-volume-d5c0eb99-740e-4271-8482-88b26ad564ef no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:36:00.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4934" for this suite. 
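The repeated `Waiting up to 5m0s for pod "…" in namespace "…" to be "Succeeded or Failed"` entries above come from a poll-until-terminal-phase loop. A minimal sketch of that pattern (a hypothetical simplification for illustration, not the actual e2e framework implementation):

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300.0, poll_s=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reports a terminal phase.

    Mirrors the log's "Succeeded or Failed" wait: the elapsed time is
    checked against the timeout (5m0s in the log) on every iteration.
    get_phase, clock, and sleep are injectable so the loop can be
    exercised without a real cluster.
    """
    start = clock()
    while clock() - start < timeout_s:
        phase = get_phase()  # e.g. "Pending", "Running", "Succeeded", "Failed"
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll_s)
    raise TimeoutError("pod did not reach a terminal phase in time")
```

In the log above the pod reports `Pending` twice (at ~21ms and ~2.1s elapsed) before reaching `Succeeded` at ~4.1s, which is exactly the observation sequence such a loop would print.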
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:36:00.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Aug 14 15:36:00.751: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:36:07.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1325" for this suite. 
• [SLOW TEST:6.441 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":204,"skipped":3547,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:36:07.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 14 15:36:07.199: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e56afcc6-32cb-43bf-97b2-e8f46215d807" in namespace "downward-api-1093" to be "Succeeded or Failed" Aug 14 15:36:07.217: INFO: Pod "downwardapi-volume-e56afcc6-32cb-43bf-97b2-e8f46215d807": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.603264ms Aug 14 15:36:09.285: INFO: Pod "downwardapi-volume-e56afcc6-32cb-43bf-97b2-e8f46215d807": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085967594s Aug 14 15:36:11.405: INFO: Pod "downwardapi-volume-e56afcc6-32cb-43bf-97b2-e8f46215d807": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.206105898s STEP: Saw pod success Aug 14 15:36:11.405: INFO: Pod "downwardapi-volume-e56afcc6-32cb-43bf-97b2-e8f46215d807" satisfied condition "Succeeded or Failed" Aug 14 15:36:11.409: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e56afcc6-32cb-43bf-97b2-e8f46215d807 container client-container: STEP: delete the pod Aug 14 15:36:11.435: INFO: Waiting for pod downwardapi-volume-e56afcc6-32cb-43bf-97b2-e8f46215d807 to disappear Aug 14 15:36:11.438: INFO: Pod downwardapi-volume-e56afcc6-32cb-43bf-97b2-e8f46215d807 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:36:11.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1093" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:36:11.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-3a01dd0d-0fe0-4c6b-bf12-5d7740eef372 STEP: Creating a pod to test consume secrets Aug 14 15:36:11.560: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-22e32588-66fa-4522-a674-3a1d3ad56622" in namespace "projected-2251" to be "Succeeded or Failed" Aug 14 15:36:11.565: INFO: Pod "pod-projected-secrets-22e32588-66fa-4522-a674-3a1d3ad56622": Phase="Pending", Reason="", readiness=false. Elapsed: 4.73555ms Aug 14 15:36:13.609: INFO: Pod "pod-projected-secrets-22e32588-66fa-4522-a674-3a1d3ad56622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048679255s Aug 14 15:36:15.614: INFO: Pod "pod-projected-secrets-22e32588-66fa-4522-a674-3a1d3ad56622": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.053415224s STEP: Saw pod success Aug 14 15:36:15.614: INFO: Pod "pod-projected-secrets-22e32588-66fa-4522-a674-3a1d3ad56622" satisfied condition "Succeeded or Failed" Aug 14 15:36:15.617: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-22e32588-66fa-4522-a674-3a1d3ad56622 container projected-secret-volume-test: STEP: delete the pod Aug 14 15:36:15.818: INFO: Waiting for pod pod-projected-secrets-22e32588-66fa-4522-a674-3a1d3ad56622 to disappear Aug 14 15:36:15.857: INFO: Pod pod-projected-secrets-22e32588-66fa-4522-a674-3a1d3ad56622 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:36:15.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2251" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3600,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:36:15.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:36:16.090: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:36:16.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8580" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":207,"skipped":3622,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:36:16.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-a5a68b4f-7c0d-4258-a2ec-ced45300eec7 in namespace container-probe-1960 Aug 14 15:36:20.950: INFO: Started pod liveness-a5a68b4f-7c0d-4258-a2ec-ced45300eec7 in namespace container-probe-1960 STEP: checking the pod's current state and 
verifying that restartCount is present Aug 14 15:36:20.954: INFO: Initial restart count of pod liveness-a5a68b4f-7c0d-4258-a2ec-ced45300eec7 is 0 Aug 14 15:36:33.003: INFO: Restart count of pod container-probe-1960/liveness-a5a68b4f-7c0d-4258-a2ec-ced45300eec7 is now 1 (12.048924854s elapsed) Aug 14 15:36:53.184: INFO: Restart count of pod container-probe-1960/liveness-a5a68b4f-7c0d-4258-a2ec-ced45300eec7 is now 2 (32.229436725s elapsed) Aug 14 15:37:13.696: INFO: Restart count of pod container-probe-1960/liveness-a5a68b4f-7c0d-4258-a2ec-ced45300eec7 is now 3 (52.742000345s elapsed) Aug 14 15:37:33.972: INFO: Restart count of pod container-probe-1960/liveness-a5a68b4f-7c0d-4258-a2ec-ced45300eec7 is now 4 (1m13.017211484s elapsed) Aug 14 15:38:42.349: INFO: Restart count of pod container-probe-1960/liveness-a5a68b4f-7c0d-4258-a2ec-ced45300eec7 is now 5 (2m21.3942916s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:38:42.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1960" for this suite. 
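The probe test above records restartCount at each observation (0 → 1 → 2 → 3 → 4 → 5) and asserts it only ever moves upward. The monotonicity check it performs can be sketched as (a hypothetical simplification of the framework logic):

```python
def restart_counts_monotonic(observations):
    """Return True if restartCount never decreased between successive
    observations of the liveness pod, as the e2e test requires.

    A kubelet bug that reset or double-counted restarts would surface
    here as a decreasing pair and fail the test.
    """
    return all(prev <= cur for prev, cur in zip(observations, observations[1:]))
```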
• [SLOW TEST:145.701 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3634,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:38:42.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:38:43.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4395" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":209,"skipped":3657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:38:43.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0814 15:39:24.764942 10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 14 15:39:24.765: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:39:24.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6900" for this suite.

• [SLOW TEST:41.091 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":210,"skipped":3689,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:39:24.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:39:31.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1998" for this suite. 
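The kubelet hostAliases test above verifies that a pod's `spec.hostAliases` entries appear as lines in the container's `/etc/hosts`. The rendering can be sketched roughly as (hypothetical illustration of the `/etc/hosts` line format, not the kubelet's actual implementation):

```python
def render_host_aliases(host_aliases):
    """Render pod hostAliases entries into /etc/hosts-style lines.

    Each hostAlias maps one IP to one or more hostnames, producing
    a line of the form "<ip>\t<name1> <name2> ...". The e2e test
    then greps the container's /etc/hosts for these entries.
    """
    lines = []
    for alias in host_aliases:
        names = " ".join(alias["hostnames"])
        lines.append(f'{alias["ip"]}\t{names}')
    return "\n".join(lines)
```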
• [SLOW TEST:7.105 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:39:31.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Aug 14 15:39:32.986: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Aug 14 15:40:52.094: INFO: >>> kubeConfig: /root/.kube/config Aug 14 15:41:02.280: INFO: >>> kubeConfig: 
/root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:42:12.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6967" for this suite. • [SLOW TEST:160.208 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":212,"skipped":3722,"failed":0} [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:42:12.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:42:12.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2809" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":213,"skipped":3722,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:42:12.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Aug 14 15:42:12.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-102' Aug 14 15:42:17.110: INFO: stderr: "" Aug 14 15:42:17.110: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all 
containers in name=update-demo pods to come up. Aug 14 15:42:17.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-102' Aug 14 15:42:18.393: INFO: stderr: "" Aug 14 15:42:18.393: INFO: stdout: "update-demo-nautilus-frgjj update-demo-nautilus-g78xp " Aug 14 15:42:18.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-frgjj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:19.622: INFO: stderr: "" Aug 14 15:42:19.623: INFO: stdout: "" Aug 14 15:42:19.623: INFO: update-demo-nautilus-frgjj is created but not running Aug 14 15:42:24.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-102' Aug 14 15:42:25.887: INFO: stderr: "" Aug 14 15:42:25.887: INFO: stdout: "update-demo-nautilus-frgjj update-demo-nautilus-g78xp " Aug 14 15:42:25.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-frgjj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:27.100: INFO: stderr: "" Aug 14 15:42:27.100: INFO: stdout: "true" Aug 14 15:42:27.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-frgjj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:28.346: INFO: stderr: "" Aug 14 15:42:28.346: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 14 15:42:28.346: INFO: validating pod update-demo-nautilus-frgjj Aug 14 15:42:28.352: INFO: got data: { "image": "nautilus.jpg" } Aug 14 15:42:28.353: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 14 15:42:28.353: INFO: update-demo-nautilus-frgjj is verified up and running Aug 14 15:42:28.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g78xp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:29.574: INFO: stderr: "" Aug 14 15:42:29.574: INFO: stdout: "true" Aug 14 15:42:29.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g78xp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:30.822: INFO: stderr: "" Aug 14 15:42:30.822: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 14 15:42:30.823: INFO: validating pod update-demo-nautilus-g78xp Aug 14 15:42:30.834: INFO: got data: { "image": "nautilus.jpg" } Aug 14 15:42:30.835: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 14 15:42:30.835: INFO: update-demo-nautilus-g78xp is verified up and running STEP: scaling down the replication controller Aug 14 15:42:30.847: INFO: scanned /root for discovery docs: Aug 14 15:42:30.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-102' Aug 14 15:42:33.273: INFO: stderr: "" Aug 14 15:42:33.273: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 14 15:42:33.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-102' Aug 14 15:42:34.630: INFO: stderr: "" Aug 14 15:42:34.630: INFO: stdout: "update-demo-nautilus-frgjj update-demo-nautilus-g78xp " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 14 15:42:39.632: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-102' Aug 14 15:42:40.967: INFO: stderr: "" Aug 14 15:42:40.967: INFO: stdout: "update-demo-nautilus-frgjj update-demo-nautilus-g78xp " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 14 15:42:45.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-102' Aug 14 15:42:47.240: INFO: stderr: "" Aug 14 15:42:47.240: INFO: stdout: "update-demo-nautilus-g78xp " Aug 14 15:42:47.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g78xp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:48.510: INFO: stderr: "" Aug 14 15:42:48.510: INFO: stdout: "true" Aug 14 15:42:48.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g78xp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:49.773: INFO: stderr: "" Aug 14 15:42:49.773: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 14 15:42:49.773: INFO: validating pod update-demo-nautilus-g78xp Aug 14 15:42:49.779: INFO: got data: { "image": "nautilus.jpg" } Aug 14 15:42:49.779: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 14 15:42:49.779: INFO: update-demo-nautilus-g78xp is verified up and running STEP: scaling up the replication controller Aug 14 15:42:49.787: INFO: scanned /root for discovery docs: Aug 14 15:42:49.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-102' Aug 14 15:42:51.109: INFO: stderr: "" Aug 14 15:42:51.109: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 14 15:42:51.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-102' Aug 14 15:42:52.420: INFO: stderr: "" Aug 14 15:42:52.420: INFO: stdout: "update-demo-nautilus-g78xp update-demo-nautilus-p5p9m " Aug 14 15:42:52.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g78xp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:53.695: INFO: stderr: "" Aug 14 15:42:53.695: INFO: stdout: "true" Aug 14 15:42:53.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g78xp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:54.954: INFO: stderr: "" Aug 14 15:42:54.955: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 14 15:42:54.955: INFO: validating pod update-demo-nautilus-g78xp Aug 14 15:42:54.960: INFO: got data: { "image": "nautilus.jpg" } Aug 14 15:42:54.960: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 14 15:42:54.960: INFO: update-demo-nautilus-g78xp is verified up and running Aug 14 15:42:54.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p5p9m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:56.215: INFO: stderr: "" Aug 14 15:42:56.216: INFO: stdout: "true" Aug 14 15:42:56.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p5p9m -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-102' Aug 14 15:42:57.484: INFO: stderr: "" Aug 14 15:42:57.484: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 14 15:42:57.484: INFO: validating pod update-demo-nautilus-p5p9m Aug 14 15:42:57.489: INFO: got data: { "image": "nautilus.jpg" } Aug 14 15:42:57.489: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 14 15:42:57.489: INFO: update-demo-nautilus-p5p9m is verified up and running STEP: using delete to clean up resources Aug 14 15:42:57.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-102' Aug 14 15:42:58.695: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 14 15:42:58.695: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 14 15:42:58.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-102' Aug 14 15:42:59.933: INFO: stderr: "No resources found in kubectl-102 namespace.\n" Aug 14 15:42:59.934: INFO: stdout: "" Aug 14 15:42:59.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-102 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 14 15:43:01.213: INFO: stderr: "" Aug 14 15:43:01.217: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:43:01.218: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "kubectl-102" for this suite. • [SLOW TEST:49.009 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":214,"skipped":3744,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:43:01.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:43:16.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8910" for this suite. STEP: Destroying namespace "nsdeletetest-2178" for this suite. Aug 14 15:43:16.651: INFO: Namespace nsdeletetest-2178 was already deleted STEP: Destroying namespace "nsdeletetest-1351" for this suite. • [SLOW TEST:15.379 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":215,"skipped":3782,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:43:16.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
cm-test-opt-del-83603ac4-97c3-47b7-aa3c-70aa15dee871 STEP: Creating configMap with name cm-test-opt-upd-3ddfc3b5-c962-430b-a4ff-f2b24882d6d7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-83603ac4-97c3-47b7-aa3c-70aa15dee871 STEP: Updating configmap cm-test-opt-upd-3ddfc3b5-c962-430b-a4ff-f2b24882d6d7 STEP: Creating configMap with name cm-test-opt-create-0e346b05-97c7-4c32-a379-c9752f36caf2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:44:51.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8371" for this suite. • [SLOW TEST:94.725 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3792,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:44:51.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 14 15:44:51.470: INFO: Waiting up to 5m0s for pod "downward-api-4c5d9608-b199-418e-9128-5aa239594173" in namespace "downward-api-4793" to be "Succeeded or Failed" Aug 14 15:44:51.521: INFO: Pod "downward-api-4c5d9608-b199-418e-9128-5aa239594173": Phase="Pending", Reason="", readiness=false. Elapsed: 50.229076ms Aug 14 15:44:53.609: INFO: Pod "downward-api-4c5d9608-b199-418e-9128-5aa239594173": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138113966s Aug 14 15:44:55.614: INFO: Pod "downward-api-4c5d9608-b199-418e-9128-5aa239594173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143871651s STEP: Saw pod success Aug 14 15:44:55.614: INFO: Pod "downward-api-4c5d9608-b199-418e-9128-5aa239594173" satisfied condition "Succeeded or Failed" Aug 14 15:44:55.619: INFO: Trying to get logs from node kali-worker2 pod downward-api-4c5d9608-b199-418e-9128-5aa239594173 container dapi-container: STEP: delete the pod Aug 14 15:44:55.807: INFO: Waiting for pod downward-api-4c5d9608-b199-418e-9128-5aa239594173 to disappear Aug 14 15:44:55.811: INFO: Pod downward-api-4c5d9608-b199-418e-9128-5aa239594173 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:44:55.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4793" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3804,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:44:55.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 14 15:44:58.822: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 14 15:45:01.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016698, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016698, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016698, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016698, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 14 15:45:03.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016698, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016698, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016698, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016698, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 14 15:45:06.789: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 14 15:45:06.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:45:08.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7852" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.977 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":218,"skipped":3804,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:45:09.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-rrmb STEP: Creating a pod to test atomic-volume-subpath Aug 14 15:45:10.515: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rrmb" in namespace "subpath-8547" to be "Succeeded or Failed" Aug 14 15:45:10.639: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Pending", Reason="", readiness=false. Elapsed: 123.615677ms Aug 14 15:45:12.647: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131990596s Aug 14 15:45:14.654: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139286247s Aug 14 15:45:16.818: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Running", Reason="", readiness=true. Elapsed: 6.302531577s Aug 14 15:45:18.843: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Running", Reason="", readiness=true. Elapsed: 8.327780128s Aug 14 15:45:21.088: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Running", Reason="", readiness=true. Elapsed: 10.572588316s Aug 14 15:45:23.094: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Running", Reason="", readiness=true. Elapsed: 12.578551897s Aug 14 15:45:25.101: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Running", Reason="", readiness=true. Elapsed: 14.585661163s Aug 14 15:45:27.155: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Running", Reason="", readiness=true. Elapsed: 16.639524665s Aug 14 15:45:29.243: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Running", Reason="", readiness=true. Elapsed: 18.728133308s Aug 14 15:45:31.251: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Running", Reason="", readiness=true. Elapsed: 20.735662261s Aug 14 15:45:33.258: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.742692042s Aug 14 15:45:35.265: INFO: Pod "pod-subpath-test-downwardapi-rrmb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.749950963s STEP: Saw pod success Aug 14 15:45:35.265: INFO: Pod "pod-subpath-test-downwardapi-rrmb" satisfied condition "Succeeded or Failed" Aug 14 15:45:35.270: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-rrmb container test-container-subpath-downwardapi-rrmb: STEP: delete the pod Aug 14 15:45:35.676: INFO: Waiting for pod pod-subpath-test-downwardapi-rrmb to disappear Aug 14 15:45:35.690: INFO: Pod pod-subpath-test-downwardapi-rrmb no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-rrmb Aug 14 15:45:35.691: INFO: Deleting pod "pod-subpath-test-downwardapi-rrmb" in namespace "subpath-8547" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:45:35.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8547" for this suite. 
• [SLOW TEST:25.906 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":219,"skipped":3828,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:45:35.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-06aef354-1b7f-4ba7-8daf-7dfbb0d43c79 STEP: Creating secret with name secret-projected-all-test-volume-2486ee39-0c9c-4efe-adca-e40fede00528 STEP: Creating a pod to test Check all projections for projected volume plugin Aug 14 15:45:36.898: INFO: Waiting up to 5m0s for pod "projected-volume-4e58ed59-58f3-496d-8e0c-2e8582e0b055" in namespace "projected-4325" to be "Succeeded or Failed" Aug 14 15:45:36.965: INFO: Pod 
"projected-volume-4e58ed59-58f3-496d-8e0c-2e8582e0b055": Phase="Pending", Reason="", readiness=false. Elapsed: 66.151739ms Aug 14 15:45:38.970: INFO: Pod "projected-volume-4e58ed59-58f3-496d-8e0c-2e8582e0b055": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071430689s Aug 14 15:45:40.975: INFO: Pod "projected-volume-4e58ed59-58f3-496d-8e0c-2e8582e0b055": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075994994s STEP: Saw pod success Aug 14 15:45:40.975: INFO: Pod "projected-volume-4e58ed59-58f3-496d-8e0c-2e8582e0b055" satisfied condition "Succeeded or Failed" Aug 14 15:45:40.978: INFO: Trying to get logs from node kali-worker pod projected-volume-4e58ed59-58f3-496d-8e0c-2e8582e0b055 container projected-all-volume-test: STEP: delete the pod Aug 14 15:45:41.639: INFO: Waiting for pod projected-volume-4e58ed59-58f3-496d-8e0c-2e8582e0b055 to disappear Aug 14 15:45:41.642: INFO: Pod projected-volume-4e58ed59-58f3-496d-8e0c-2e8582e0b055 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:45:41.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4325" for this suite. 
• [SLOW TEST:6.029 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3835,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 14 15:45:41.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Aug 14 15:45:41.942: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1807" to be "Succeeded or Failed" Aug 14 15:45:42.017: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 73.9417ms Aug 14 15:45:44.023: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.080698534s Aug 14 15:45:46.454: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.511352818s Aug 14 15:45:48.627: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.684436071s Aug 14 15:45:50.633: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.690515524s Aug 14 15:45:52.639: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.696689543s Aug 14 15:45:55.154: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.211374102s STEP: Saw pod success Aug 14 15:45:55.154: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Aug 14 15:45:55.364: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Aug 14 15:45:55.617: INFO: Waiting for pod pod-host-path-test to disappear Aug 14 15:45:55.656: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 14 15:45:55.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1807" for this suite. 
• [SLOW TEST:13.928 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3848,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:45:55.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 15:45:56.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 14 15:46:16.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4688 create -f -'
Aug 14 15:46:20.871: INFO: stderr: ""
Aug 14 15:46:20.871: INFO: stdout: "e2e-test-crd-publish-openapi-4041-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 14 15:46:20.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4688 delete e2e-test-crd-publish-openapi-4041-crds test-cr'
Aug 14 15:46:22.101: INFO: stderr: ""
Aug 14 15:46:22.102: INFO: stdout: "e2e-test-crd-publish-openapi-4041-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 14 15:46:22.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4688 apply -f -'
Aug 14 15:46:23.801: INFO: stderr: ""
Aug 14 15:46:23.802: INFO: stdout: "e2e-test-crd-publish-openapi-4041-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 14 15:46:23.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4688 delete e2e-test-crd-publish-openapi-4041-crds test-cr'
Aug 14 15:46:25.117: INFO: stderr: ""
Aug 14 15:46:25.117: INFO: stdout: "e2e-test-crd-publish-openapi-4041-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 14 15:46:25.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4041-crds'
Aug 14 15:46:27.042: INFO: stderr: ""
Aug 14 15:46:27.042: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4041-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:46:37.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4688" for this suite.
• [SLOW TEST:41.839 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":222,"skipped":3855,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:46:37.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 14 15:46:44.166: INFO: Successfully updated pod "labelsupdate222a97c0-4fb0-400d-8a22-6f0f4f3f6701"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:46:46.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9657" for this suite.
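The projected downwardAPI test above verifies that a file projected from pod labels is refreshed when the labels change. A rough sketch of such a volume is shown below; the pod name, image, and paths are illustrative assumptions, not taken from the log.

```yaml
# Hypothetical sketch of a projected downward API volume exposing pod labels;
# all names here are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example
  labels:
    key1: value1            # label the test later modifies
spec:
  containers:
  - name: client-container
    image: busybox          # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```

When the pod's labels are updated, the kubelet rewrites the projected `labels` file, which is what the "Successfully updated pod" entry in the log exercises.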
• [SLOW TEST:9.169 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3887,"failed":0}
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:46:46.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 14 15:46:46.849: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:46:55.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2967" for this suite.
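The InitContainer test above creates a RestartAlways pod whose init containers must run to completion, in order, before the regular container starts. A minimal sketch of such a pod is below; the pod name, images, and commands are illustrative assumptions.

```yaml
# Hypothetical sketch of a RestartAlways pod with init containers;
# names, images, and commands are assumptions, not from the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox          # assumed image
    command: ["/bin/true"]  # must exit 0 before init2 starts
  - name: init2
    image: busybox
    command: ["/bin/true"]  # must exit 0 before the main container starts
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```

The kubelet records `Initialized` only after both init containers succeed, which is the ordering the test asserts.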
• [SLOW TEST:8.702 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":224,"skipped":3887,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:46:55.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 14 15:46:55.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2267'
Aug 14 15:46:58.332: INFO: stderr: ""
Aug 14 15:46:58.332: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 14 15:46:59.341: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 15:46:59.342: INFO: Found 0 / 1
Aug 14 15:47:00.485: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 15:47:00.486: INFO: Found 0 / 1
Aug 14 15:47:01.504: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 15:47:01.504: INFO: Found 0 / 1
Aug 14 15:47:02.341: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 15:47:02.341: INFO: Found 0 / 1
Aug 14 15:47:03.342: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 15:47:03.342: INFO: Found 1 / 1
Aug 14 15:47:03.342: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Aug 14 15:47:03.349: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 15:47:03.349: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Aug 14 15:47:03.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config patch pod agnhost-master-6xz6p --namespace=kubectl-2267 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 14 15:47:04.564: INFO: stderr: ""
Aug 14 15:47:04.564: INFO: stdout: "pod/agnhost-master-6xz6p patched\n"
STEP: checking annotations
Aug 14 15:47:04.593: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 15:47:04.593: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:47:04.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2267" for this suite.
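The inline patch the test passes via `-p {"metadata":{"annotations":{"x":"y"}}}` is a strategic merge patch. For readers who prefer keeping patches in files, the same patch can be expressed as YAML; the filename below is illustrative, not part of the test.

```yaml
# annotation-patch.yaml (hypothetical filename): strategic merge patch
# equivalent to the inline -p {"metadata":{"annotations":{"x":"y"}}} above.
metadata:
  annotations:
    x: "y"
```

On newer kubectl versions this can be applied with `kubectl patch pod <pod-name> --patch-file annotation-patch.yaml`; with the kubectl version in this log, `-p "$(cat annotation-patch.yaml)"` would be the equivalent.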
• [SLOW TEST:9.225 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":225,"skipped":3892,"failed":0}
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:47:04.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 14 15:47:04.844: INFO: Waiting up to 5m0s for pod "pod-34d4ce4d-e487-4ca4-b597-1809ab8fe2fc" in namespace "emptydir-7088" to be "Succeeded or Failed"
Aug 14 15:47:04.870: INFO: Pod "pod-34d4ce4d-e487-4ca4-b597-1809ab8fe2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 25.058582ms
Aug 14 15:47:06.875: INFO: Pod "pod-34d4ce4d-e487-4ca4-b597-1809ab8fe2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030754353s
Aug 14 15:47:09.010: INFO: Pod "pod-34d4ce4d-e487-4ca4-b597-1809ab8fe2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166022296s
Aug 14 15:47:11.017: INFO: Pod "pod-34d4ce4d-e487-4ca4-b597-1809ab8fe2fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.172062377s
STEP: Saw pod success
Aug 14 15:47:11.017: INFO: Pod "pod-34d4ce4d-e487-4ca4-b597-1809ab8fe2fc" satisfied condition "Succeeded or Failed"
Aug 14 15:47:11.022: INFO: Trying to get logs from node kali-worker pod pod-34d4ce4d-e487-4ca4-b597-1809ab8fe2fc container test-container:
STEP: delete the pod
Aug 14 15:47:11.262: INFO: Waiting for pod pod-34d4ce4d-e487-4ca4-b597-1809ab8fe2fc to disappear
Aug 14 15:47:11.309: INFO: Pod pod-34d4ce4d-e487-4ca4-b597-1809ab8fe2fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:47:11.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7088" for this suite.
• [SLOW TEST:6.842 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3892,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:47:11.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1647.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1647.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1647.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1647.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1647.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1647.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 14 15:47:21.775: INFO: DNS probes using dns-1647/dns-test-638f6263-cb9b-4423-bb30-e1c3db019e50 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:47:22.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1647" for this suite.
• [SLOW TEST:11.462 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":227,"skipped":3910,"failed":0}
SS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:47:22.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 14 15:47:30.239: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:47:31.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3435" for this suite.
• [SLOW TEST:8.456 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":228,"skipped":3912,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:47:31.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:47:37.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-147" for this suite.
• [SLOW TEST:6.309 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3920,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:47:37.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 14 15:47:38.164: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:38.239: INFO: Number of nodes with available pods: 0
Aug 14 15:47:38.239: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:39.253: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:39.262: INFO: Number of nodes with available pods: 0
Aug 14 15:47:39.262: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:40.251: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:40.258: INFO: Number of nodes with available pods: 0
Aug 14 15:47:40.258: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:41.253: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:41.259: INFO: Number of nodes with available pods: 0
Aug 14 15:47:41.259: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:42.335: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:42.428: INFO: Number of nodes with available pods: 0
Aug 14 15:47:42.428: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:43.254: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:43.530: INFO: Number of nodes with available pods: 0
Aug 14 15:47:43.530: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:44.314: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:44.362: INFO: Number of nodes with available pods: 0
Aug 14 15:47:44.362: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:45.250: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:45.254: INFO: Number of nodes with available pods: 1
Aug 14 15:47:45.254: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:46.250: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:46.257: INFO: Number of nodes with available pods: 2
Aug 14 15:47:46.257: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 14 15:47:46.322: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:46.390: INFO: Number of nodes with available pods: 1
Aug 14 15:47:46.390: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:47.403: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:47.409: INFO: Number of nodes with available pods: 1
Aug 14 15:47:47.409: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:48.413: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:48.420: INFO: Number of nodes with available pods: 1
Aug 14 15:47:48.420: INFO: Node kali-worker is running more than one daemon pod
Aug 14 15:47:49.439: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 14 15:47:49.477: INFO: Number of nodes with available pods: 2
Aug 14 15:47:49.477: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6525, will wait for the garbage collector to delete the pods
Aug 14 15:47:49.569: INFO: Deleting DaemonSet.extensions daemon-set took: 7.522447ms
Aug 14 15:47:49.670: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.659943ms
Aug 14 15:48:03.519: INFO: Number of nodes with available pods: 0
Aug 14 15:48:03.519: INFO: Number of running nodes: 0, number of available pods: 0
Aug 14 15:48:03.533: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6525/daemonsets","resourceVersion":"9563003"},"items":null}
Aug 14 15:48:03.537: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6525/pods","resourceVersion":"9563003"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:48:03.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6525" for this suite.
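The DaemonSet test above creates a "simple DaemonSet" and checks that a pod forced into the Failed phase is recreated. A minimal sketch of such a DaemonSet is below; the labels and image are illustrative assumptions (only the name "daemon-set" appears in the log).

```yaml
# Hypothetical sketch of the simple DaemonSet the test creates;
# labels and image are assumptions, not from the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set              # name taken from the log above
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: busybox          # assumed image
        command: ["sh", "-c", "sleep 3600"]
```

Because this spec carries no toleration for `node-role.kubernetes.io/master:NoSchedule`, the DaemonSet controller skips the tainted kali-control-plane node, which is exactly the "can't tolerate node ... skip checking this node" entries repeated throughout the log.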
• [SLOW TEST:25.887 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":230,"skipped":3922,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:48:03.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 14 15:48:03.680: INFO: Waiting up to 5m0s for pod "pod-b9afcf47-11a5-464c-ad74-158b3f6d400c" in namespace "emptydir-6419" to be "Succeeded or Failed"
Aug 14 15:48:03.689: INFO: Pod "pod-b9afcf47-11a5-464c-ad74-158b3f6d400c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.271523ms
Aug 14 15:48:05.705: INFO: Pod "pod-b9afcf47-11a5-464c-ad74-158b3f6d400c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024782117s
Aug 14 15:48:07.711: INFO: Pod "pod-b9afcf47-11a5-464c-ad74-158b3f6d400c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030600064s
STEP: Saw pod success
Aug 14 15:48:07.711: INFO: Pod "pod-b9afcf47-11a5-464c-ad74-158b3f6d400c" satisfied condition "Succeeded or Failed"
Aug 14 15:48:07.715: INFO: Trying to get logs from node kali-worker pod pod-b9afcf47-11a5-464c-ad74-158b3f6d400c container test-container:
STEP: delete the pod
Aug 14 15:48:08.032: INFO: Waiting for pod pod-b9afcf47-11a5-464c-ad74-158b3f6d400c to disappear
Aug 14 15:48:08.047: INFO: Pod pod-b9afcf47-11a5-464c-ad74-158b3f6d400c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:48:08.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6419" for this suite.
•
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3925,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:48:08.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-4b8ca7bf-6397-4d58-8dcf-57dbc8cc6bb0
STEP: Creating a pod to test consume secrets
Aug 14 15:48:08.157: INFO: Waiting up to 5m0s for pod "pod-secrets-f7637b5a-a764-48fb-8848-0694463263e8" in namespace "secrets-6818" to be "Succeeded or Failed"
Aug 14 15:48:08.161: INFO: Pod "pod-secrets-f7637b5a-a764-48fb-8848-0694463263e8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.920349ms
Aug 14 15:48:10.275: INFO: Pod "pod-secrets-f7637b5a-a764-48fb-8848-0694463263e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118436738s
Aug 14 15:48:12.282: INFO: Pod "pod-secrets-f7637b5a-a764-48fb-8848-0694463263e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125116334s
Aug 14 15:48:14.323: INFO: Pod "pod-secrets-f7637b5a-a764-48fb-8848-0694463263e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.165873521s
STEP: Saw pod success
Aug 14 15:48:14.323: INFO: Pod "pod-secrets-f7637b5a-a764-48fb-8848-0694463263e8" satisfied condition "Succeeded or Failed"
Aug 14 15:48:14.341: INFO: Trying to get logs from node kali-worker pod pod-secrets-f7637b5a-a764-48fb-8848-0694463263e8 container secret-volume-test:
STEP: delete the pod
Aug 14 15:48:14.482: INFO: Waiting for pod pod-secrets-f7637b5a-a764-48fb-8848-0694463263e8 to disappear
Aug 14 15:48:14.489: INFO: Pod pod-secrets-f7637b5a-a764-48fb-8848-0694463263e8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:48:14.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6818" for this suite.
• [SLOW TEST:6.441 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3937,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:48:14.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 15:48:14.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-176'
Aug 14 15:48:16.190: INFO: stderr: ""
Aug 14 15:48:16.190: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 14 15:48:16.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-176'
Aug 14 15:48:18.146: INFO: stderr: ""
Aug 14 15:48:18.146: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 14 15:48:19.162: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 15:48:19.162: INFO: Found 0 / 1
Aug 14 15:48:20.156: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 15:48:20.156: INFO: Found 1 / 1
Aug 14 15:48:20.156: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Aug 14 15:48:20.163: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 14 15:48:20.163: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Aug 14 15:48:20.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe pod agnhost-master-bszzz --namespace=kubectl-176'
Aug 14 15:48:21.522: INFO: stderr: ""
Aug 14 15:48:21.522: INFO: stdout: "Name: agnhost-master-bszzz\nNamespace: kubectl-176\nPriority: 0\nNode: kali-worker/172.18.0.13\nStart Time: Fri, 14 Aug 2020 15:48:16 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.181\nIPs:\n IP: 10.244.2.181\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://e908a6f0ee0e6868bce6e9d1b822207d8689ec9741a498a0dffb77bc140b9626\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 14 Aug 2020 15:48:18 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-xbxdh (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-xbxdh:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-xbxdh\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-176/agnhost-master-bszzz to kali-worker\n Normal Pulled 4s kubelet, kali-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 3s kubelet, kali-worker Created container agnhost-master\n Normal Started 2s kubelet, kali-worker Started container agnhost-master\n"
Aug 14 15:48:21.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-176'
Aug 14 15:48:23.011: INFO: stderr: ""
Aug 14 15:48:23.012: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-176\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-master-bszzz\n"
Aug 14 15:48:23.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-176'
Aug 14 15:48:24.303: INFO: stderr: ""
Aug 14 15:48:24.303: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-176\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.99.101.33\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.181:6379\nSession Affinity: None\nEvents: \n"
Aug 14 15:48:24.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Aug 14 15:48:26.030: INFO: stderr: ""
Aug 14 15:48:26.031: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:27:46 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Fri, 14 Aug 2020 15:48:22 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 14 Aug 2020 15:43:42 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 14 Aug 2020 15:43:42 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 14 Aug 2020 15:43:42 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 14 Aug 2020 15:43:42 +0000 Fri, 10 Jul 2020 10:28:23 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.16\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: d83d42c4b42d4de1b3233683d9cadf95\n System UUID: e06c57c7-ce4f-4ae9-8bb6-40f1dc0e1a64\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-34-g49b0743c\n Kubelet Version: v1.18.4\n Kube-Proxy Version: v1.18.4\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-qtcqs 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 35d\n kube-system coredns-66bff467f8-tjkg9 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 35d\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35d\n kube-system kindnet-zxw2f 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 35d\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 35d\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 35d\n kube-system kube-proxy-xmqbs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35d\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 35d\n local-path-storage local-path-provisioner-67795f75bd-clsb6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n"
Aug 14 15:48:26.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe namespace kubectl-176'
Aug 14 15:48:28.459: INFO: stderr: ""
Aug 14 15:48:28.459: INFO: stdout: "Name: kubectl-176\nLabels: e2e-framework=kubectl\n e2e-run=431c97d7-cdd1-4751-91cc-ae0ea0a5123c\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:48:28.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-176" for this suite.
• [SLOW TEST:13.973 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":233,"skipped":3953,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:48:28.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 14 15:48:30.032: INFO: Waiting up to 5m0s for pod "pod-7e135901-8caf-475b-afaf-fd3fb949c434" in namespace "emptydir-223" to be "Succeeded or Failed"
Aug 14 15:48:30.265: INFO: Pod "pod-7e135901-8caf-475b-afaf-fd3fb949c434": Phase="Pending", Reason="", readiness=false. Elapsed: 232.125419ms
Aug 14 15:48:32.533: INFO: Pod "pod-7e135901-8caf-475b-afaf-fd3fb949c434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500229164s
Aug 14 15:48:35.018: INFO: Pod "pod-7e135901-8caf-475b-afaf-fd3fb949c434": Phase="Pending", Reason="", readiness=false. Elapsed: 4.985350119s
Aug 14 15:48:37.212: INFO: Pod "pod-7e135901-8caf-475b-afaf-fd3fb949c434": Phase="Running", Reason="", readiness=true. Elapsed: 7.179552315s
Aug 14 15:48:39.219: INFO: Pod "pod-7e135901-8caf-475b-afaf-fd3fb949c434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.186498913s
STEP: Saw pod success
Aug 14 15:48:39.219: INFO: Pod "pod-7e135901-8caf-475b-afaf-fd3fb949c434" satisfied condition "Succeeded or Failed"
Aug 14 15:48:39.225: INFO: Trying to get logs from node kali-worker pod pod-7e135901-8caf-475b-afaf-fd3fb949c434 container test-container: 
STEP: delete the pod
Aug 14 15:48:39.297: INFO: Waiting for pod pod-7e135901-8caf-475b-afaf-fd3fb949c434 to disappear
Aug 14 15:48:39.544: INFO: Pod pod-7e135901-8caf-475b-afaf-fd3fb949c434 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:48:39.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-223" for this suite.
• [SLOW TEST:11.080 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3982,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:48:39.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 14 15:48:44.075: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 14 15:48:46.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016924, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016924, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016924, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733016923, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 14 15:48:49.137: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:48:49.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9392" for this suite.
STEP: Destroying namespace "webhook-9392-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.900 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":235,"skipped":3983,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:48:49.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 15:48:49.571: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-937c013b-73ac-4ff5-b57e-c6b9bb9593ae
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:48:50.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4390" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":237,"skipped":4020,"failed":0}
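This negative test relies on the API server rejecting a Secret whose data map contains an empty key. A rough, simplified sketch of that rule in Go — this is an illustration, not the actual apimachinery validation code (the real validation also restricts keys to alphanumerics, `-`, `_`, and `.`):

```go
package main

import (
	"errors"
	"fmt"
)

// validateSecretData loosely mimics the rule the test above exercises:
// every key in a Secret's data map must be non-empty.
func validateSecretData(data map[string][]byte) error {
	for k := range data {
		if k == "" {
			return errors.New("secret data key must not be empty")
		}
	}
	return nil
}

func main() {
	bad := map[string][]byte{"": []byte("value")}
	fmt.Println("create rejected:", validateSecretData(bad) != nil)
}
```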
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:48:50.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 14 15:48:50.709: INFO: Waiting up to 5m0s for pod "pod-e4991156-65b0-4a55-9aac-01f04797c344" in namespace "emptydir-3635" to be "Succeeded or Failed"
Aug 14 15:48:50.770: INFO: Pod "pod-e4991156-65b0-4a55-9aac-01f04797c344": Phase="Pending", Reason="", readiness=false. Elapsed: 60.627847ms
Aug 14 15:48:52.778: INFO: Pod "pod-e4991156-65b0-4a55-9aac-01f04797c344": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068768503s
Aug 14 15:48:54.793: INFO: Pod "pod-e4991156-65b0-4a55-9aac-01f04797c344": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083438829s
STEP: Saw pod success
Aug 14 15:48:54.793: INFO: Pod "pod-e4991156-65b0-4a55-9aac-01f04797c344" satisfied condition "Succeeded or Failed"
Aug 14 15:48:54.798: INFO: Trying to get logs from node kali-worker pod pod-e4991156-65b0-4a55-9aac-01f04797c344 container test-container: 
STEP: delete the pod
Aug 14 15:48:55.190: INFO: Waiting for pod pod-e4991156-65b0-4a55-9aac-01f04797c344 to disappear
Aug 14 15:48:55.254: INFO: Pod pod-e4991156-65b0-4a55-9aac-01f04797c344 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:48:55.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3635" for this suite.

• [SLOW TEST:5.023 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4028,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:48:55.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 14 15:48:56.239: INFO: >>> kubeConfig: /root/.kube/config
Aug 14 15:49:07.060: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:50:26.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5582" for this suite.

• [SLOW TEST:91.385 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":239,"skipped":4090,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:50:26.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:50:27.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6153" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":240,"skipped":4093,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:50:27.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3844
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3844
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3844
Aug 14 15:50:27.281: INFO: Found 0 stateful pods, waiting for 1
Aug 14 15:50:37.290: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 14 15:50:37.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 14 15:50:38.772: INFO: stderr: "I0814 15:50:38.633399    2950 log.go:172] (0x400003a210) (0x40006ea000) Create stream\nI0814 15:50:38.636336    2950 log.go:172] (0x400003a210) (0x40006ea000) Stream added, broadcasting: 1\nI0814 15:50:38.648886    2950 log.go:172] (0x400003a210) Reply frame received for 1\nI0814 15:50:38.649706    2950 log.go:172] (0x400003a210) (0x4000766000) Create stream\nI0814 15:50:38.649779    2950 log.go:172] (0x400003a210) (0x4000766000) Stream added, broadcasting: 3\nI0814 15:50:38.651222    2950 log.go:172] (0x400003a210) Reply frame received for 3\nI0814 15:50:38.651528    2950 log.go:172] (0x400003a210) (0x400076e000) Create stream\nI0814 15:50:38.651592    2950 log.go:172] (0x400003a210) (0x400076e000) Stream added, broadcasting: 5\nI0814 15:50:38.653077    2950 log.go:172] (0x400003a210) Reply frame received for 5\nI0814 15:50:38.710196    2950 log.go:172] (0x400003a210) Data frame received for 5\nI0814 15:50:38.710602    2950 log.go:172] (0x400076e000) (5) Data frame handling\nI0814 15:50:38.711411    2950 log.go:172] (0x400076e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0814 15:50:38.750170    2950 log.go:172] (0x400003a210) Data frame received for 3\nI0814 15:50:38.750389    2950 log.go:172] (0x400003a210) Data frame received for 5\nI0814 15:50:38.750561    2950 log.go:172] (0x400076e000) (5) Data frame handling\nI0814 15:50:38.750676    2950 log.go:172] (0x4000766000) (3) Data frame handling\nI0814 15:50:38.750806    2950 log.go:172] (0x4000766000) (3) Data frame sent\nI0814 15:50:38.750922    2950 log.go:172] (0x400003a210) Data frame received for 3\nI0814 15:50:38.751027    2950 log.go:172] (0x4000766000) (3) Data frame handling\nI0814 15:50:38.752593    2950 log.go:172] (0x400003a210) Data frame received for 1\nI0814 15:50:38.752813    2950 log.go:172] (0x40006ea000) (1) Data frame handling\nI0814 15:50:38.752960    2950 log.go:172] (0x40006ea000) (1) Data frame sent\nI0814 15:50:38.754970  
  2950 log.go:172] (0x400003a210) (0x40006ea000) Stream removed, broadcasting: 1\nI0814 15:50:38.758674    2950 log.go:172] (0x400003a210) Go away received\nI0814 15:50:38.761446    2950 log.go:172] (0x400003a210) (0x40006ea000) Stream removed, broadcasting: 1\nI0814 15:50:38.763489    2950 log.go:172] (0x400003a210) (0x4000766000) Stream removed, broadcasting: 3\nI0814 15:50:38.763676    2950 log.go:172] (0x400003a210) (0x400076e000) Stream removed, broadcasting: 5\n"
Aug 14 15:50:38.774: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 14 15:50:38.774: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 14 15:50:38.809: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 14 15:50:48.819: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 15:50:48.819: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 15:50:48.842: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999966001s
Aug 14 15:50:49.850: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991582126s
Aug 14 15:50:50.857: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.983774505s
Aug 14 15:50:51.867: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.975648839s
Aug 14 15:50:52.875: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.966375593s
Aug 14 15:50:53.883: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.95797194s
Aug 14 15:50:54.889: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.950803608s
Aug 14 15:50:55.898: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.943807028s
Aug 14 15:50:56.906: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.935529492s
Aug 14 15:50:57.914: INFO: Verifying statefulset ss doesn't scale past 1 for another 927.327217ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3844
Aug 14 15:50:58.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:51:01.139: INFO: stderr: "I0814 15:51:01.042968    2974 log.go:172] (0x4000a54000) (0x4000a78000) Create stream\nI0814 15:51:01.046179    2974 log.go:172] (0x4000a54000) (0x4000a78000) Stream added, broadcasting: 1\nI0814 15:51:01.058919    2974 log.go:172] (0x4000a54000) Reply frame received for 1\nI0814 15:51:01.059606    2974 log.go:172] (0x4000a54000) (0x4000a780a0) Create stream\nI0814 15:51:01.059685    2974 log.go:172] (0x4000a54000) (0x4000a780a0) Stream added, broadcasting: 3\nI0814 15:51:01.061369    2974 log.go:172] (0x4000a54000) Reply frame received for 3\nI0814 15:51:01.061612    2974 log.go:172] (0x4000a54000) (0x4000a78140) Create stream\nI0814 15:51:01.061686    2974 log.go:172] (0x4000a54000) (0x4000a78140) Stream added, broadcasting: 5\nI0814 15:51:01.063425    2974 log.go:172] (0x4000a54000) Reply frame received for 5\nI0814 15:51:01.117185    2974 log.go:172] (0x4000a54000) Data frame received for 3\nI0814 15:51:01.117399    2974 log.go:172] (0x4000a54000) Data frame received for 1\nI0814 15:51:01.117676    2974 log.go:172] (0x4000a54000) Data frame received for 5\nI0814 15:51:01.117868    2974 log.go:172] (0x4000a78140) (5) Data frame handling\nI0814 15:51:01.118074    2974 log.go:172] (0x4000a780a0) (3) Data frame handling\nI0814 15:51:01.118264    2974 log.go:172] (0x4000a78000) (1) Data frame handling\nI0814 15:51:01.119091    2974 log.go:172] (0x4000a780a0) (3) Data frame sent\nI0814 15:51:01.119493    2974 log.go:172] (0x4000a54000) Data frame received for 3\nI0814 15:51:01.119561    2974 log.go:172] (0x4000a780a0) (3) Data frame handling\nI0814 15:51:01.119768    2974 log.go:172] (0x4000a78140) (5) Data frame sent\nI0814 15:51:01.119905    2974 log.go:172] (0x4000a54000) Data frame received for 5\nI0814 15:51:01.119985    2974 log.go:172] (0x4000a78140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0814 15:51:01.120878    2974 log.go:172] (0x4000a78000) (1) Data frame sent\nI0814 15:51:01.122569  
  2974 log.go:172] (0x4000a54000) (0x4000a78000) Stream removed, broadcasting: 1\nI0814 15:51:01.124660    2974 log.go:172] (0x4000a54000) Go away received\nI0814 15:51:01.128573    2974 log.go:172] (0x4000a54000) (0x4000a78000) Stream removed, broadcasting: 1\nI0814 15:51:01.129064    2974 log.go:172] (0x4000a54000) (0x4000a780a0) Stream removed, broadcasting: 3\nI0814 15:51:01.129332    2974 log.go:172] (0x4000a54000) (0x4000a78140) Stream removed, broadcasting: 5\n"
Aug 14 15:51:01.139: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 14 15:51:01.139: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 14 15:51:01.224: INFO: Found 1 stateful pods, waiting for 3
Aug 14 15:51:11.235: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 15:51:11.235: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 15:51:11.235: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 14 15:51:21.232: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 15:51:21.232: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 15:51:21.233: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 14 15:51:21.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 14 15:51:23.284: INFO: stderr: "I0814 15:51:23.168225    2997 log.go:172] (0x40000ea4d0) (0x400097c140) Create stream\nI0814 15:51:23.172187    2997 log.go:172] (0x40000ea4d0) (0x400097c140) Stream added, broadcasting: 1\nI0814 15:51:23.187376    2997 log.go:172] (0x40000ea4d0) Reply frame received for 1\nI0814 15:51:23.188603    2997 log.go:172] (0x40000ea4d0) (0x400097c1e0) Create stream\nI0814 15:51:23.188779    2997 log.go:172] (0x40000ea4d0) (0x400097c1e0) Stream added, broadcasting: 3\nI0814 15:51:23.191342    2997 log.go:172] (0x40000ea4d0) Reply frame received for 3\nI0814 15:51:23.191751    2997 log.go:172] (0x40000ea4d0) (0x400081d360) Create stream\nI0814 15:51:23.191862    2997 log.go:172] (0x40000ea4d0) (0x400081d360) Stream added, broadcasting: 5\nI0814 15:51:23.194005    2997 log.go:172] (0x40000ea4d0) Reply frame received for 5\nI0814 15:51:23.264922    2997 log.go:172] (0x40000ea4d0) Data frame received for 5\nI0814 15:51:23.265297    2997 log.go:172] (0x40000ea4d0) Data frame received for 3\nI0814 15:51:23.265410    2997 log.go:172] (0x400097c1e0) (3) Data frame handling\nI0814 15:51:23.265505    2997 log.go:172] (0x400081d360) (5) Data frame handling\nI0814 15:51:23.266133    2997 log.go:172] (0x400097c1e0) (3) Data frame sent\nI0814 15:51:23.266252    2997 log.go:172] (0x400081d360) (5) Data frame sent\nI0814 15:51:23.266489    2997 log.go:172] (0x40000ea4d0) Data frame received for 5\nI0814 15:51:23.266595    2997 log.go:172] (0x400081d360) (5) Data frame handling\nI0814 15:51:23.266871    2997 log.go:172] (0x40000ea4d0) Data frame received for 1\nI0814 15:51:23.266947    2997 log.go:172] (0x400097c140) (1) Data frame handling\nI0814 15:51:23.267018    2997 log.go:172] (0x400097c140) (1) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0814 15:51:23.267179    2997 log.go:172] (0x40000ea4d0) Data frame received for 3\nI0814 15:51:23.267267    2997 log.go:172] (0x400097c1e0) (3) Data frame handling\nI0814 15:51:23.269428  
  2997 log.go:172] (0x40000ea4d0) (0x400097c140) Stream removed, broadcasting: 1\nI0814 15:51:23.271472    2997 log.go:172] (0x40000ea4d0) Go away received\nI0814 15:51:23.274535    2997 log.go:172] (0x40000ea4d0) (0x400097c140) Stream removed, broadcasting: 1\nI0814 15:51:23.274824    2997 log.go:172] (0x40000ea4d0) (0x400097c1e0) Stream removed, broadcasting: 3\nI0814 15:51:23.274983    2997 log.go:172] (0x40000ea4d0) (0x400081d360) Stream removed, broadcasting: 5\n"
Aug 14 15:51:23.285: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 14 15:51:23.285: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 14 15:51:23.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 14 15:51:24.936: INFO: stderr: "I0814 15:51:24.764153    3020 log.go:172] (0x400003a370) (0x40007fd5e0) Create stream\nI0814 15:51:24.767212    3020 log.go:172] (0x400003a370) (0x40007fd5e0) Stream added, broadcasting: 1\nI0814 15:51:24.780199    3020 log.go:172] (0x400003a370) Reply frame received for 1\nI0814 15:51:24.780939    3020 log.go:172] (0x400003a370) (0x4000764000) Create stream\nI0814 15:51:24.781014    3020 log.go:172] (0x400003a370) (0x4000764000) Stream added, broadcasting: 3\nI0814 15:51:24.782881    3020 log.go:172] (0x400003a370) Reply frame received for 3\nI0814 15:51:24.783310    3020 log.go:172] (0x400003a370) (0x40007fd680) Create stream\nI0814 15:51:24.783438    3020 log.go:172] (0x400003a370) (0x40007fd680) Stream added, broadcasting: 5\nI0814 15:51:24.785508    3020 log.go:172] (0x400003a370) Reply frame received for 5\nI0814 15:51:24.850268    3020 log.go:172] (0x400003a370) Data frame received for 5\nI0814 15:51:24.850454    3020 log.go:172] (0x40007fd680) (5) Data frame handling\nI0814 15:51:24.850804    3020 log.go:172] (0x40007fd680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0814 15:51:24.912204    3020 log.go:172] (0x400003a370) Data frame received for 3\nI0814 15:51:24.912459    3020 log.go:172] (0x4000764000) (3) Data frame handling\nI0814 15:51:24.912620    3020 log.go:172] (0x4000764000) (3) Data frame sent\nI0814 15:51:24.912912    3020 log.go:172] (0x400003a370) Data frame received for 3\nI0814 15:51:24.913111    3020 log.go:172] (0x4000764000) (3) Data frame handling\nI0814 15:51:24.913780    3020 log.go:172] (0x400003a370) Data frame received for 5\nI0814 15:51:24.914038    3020 log.go:172] (0x400003a370) Data frame received for 1\nI0814 15:51:24.914246    3020 log.go:172] (0x40007fd5e0) (1) Data frame handling\nI0814 15:51:24.914397    3020 log.go:172] (0x40007fd5e0) (1) Data frame sent\nI0814 15:51:24.914562    3020 log.go:172] (0x40007fd680) (5) Data frame handling\nI0814 15:51:24.916882  
  3020 log.go:172] (0x400003a370) (0x40007fd5e0) Stream removed, broadcasting: 1\nI0814 15:51:24.919911    3020 log.go:172] (0x400003a370) Go away received\nI0814 15:51:24.923634    3020 log.go:172] (0x400003a370) (0x40007fd5e0) Stream removed, broadcasting: 1\nI0814 15:51:24.923904    3020 log.go:172] (0x400003a370) (0x4000764000) Stream removed, broadcasting: 3\nI0814 15:51:24.924093    3020 log.go:172] (0x400003a370) (0x40007fd680) Stream removed, broadcasting: 5\n"
Aug 14 15:51:24.936: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 14 15:51:24.937: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 14 15:51:24.937: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 14 15:51:26.456: INFO: stderr: "I0814 15:51:26.270944    3043 log.go:172] (0x400003a420) (0x4000b881e0) Create stream\nI0814 15:51:26.273700    3043 log.go:172] (0x400003a420) (0x4000b881e0) Stream added, broadcasting: 1\nI0814 15:51:26.288042    3043 log.go:172] (0x400003a420) Reply frame received for 1\nI0814 15:51:26.288809    3043 log.go:172] (0x400003a420) (0x40007f3220) Create stream\nI0814 15:51:26.288887    3043 log.go:172] (0x400003a420) (0x40007f3220) Stream added, broadcasting: 3\nI0814 15:51:26.290308    3043 log.go:172] (0x400003a420) Reply frame received for 3\nI0814 15:51:26.290666    3043 log.go:172] (0x400003a420) (0x40009a8000) Create stream\nI0814 15:51:26.290773    3043 log.go:172] (0x400003a420) (0x40009a8000) Stream added, broadcasting: 5\nI0814 15:51:26.292122    3043 log.go:172] (0x400003a420) Reply frame received for 5\nI0814 15:51:26.380307    3043 log.go:172] (0x400003a420) Data frame received for 5\nI0814 15:51:26.380581    3043 log.go:172] (0x40009a8000) (5) Data frame handling\nI0814 15:51:26.381042    3043 log.go:172] (0x40009a8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0814 15:51:26.436499    3043 log.go:172] (0x400003a420) Data frame received for 5\nI0814 15:51:26.436679    3043 log.go:172] (0x40009a8000) (5) Data frame handling\nI0814 15:51:26.437032    3043 log.go:172] (0x400003a420) Data frame received for 3\nI0814 15:51:26.437237    3043 log.go:172] (0x40007f3220) (3) Data frame handling\nI0814 15:51:26.437425    3043 log.go:172] (0x40007f3220) (3) Data frame sent\nI0814 15:51:26.437583    3043 log.go:172] (0x400003a420) Data frame received for 3\nI0814 15:51:26.437739    3043 log.go:172] (0x40007f3220) (3) Data frame handling\nI0814 15:51:26.439391    3043 log.go:172] (0x400003a420) Data frame received for 1\nI0814 15:51:26.439532    3043 log.go:172] (0x4000b881e0) (1) Data frame handling\nI0814 15:51:26.439669    3043 log.go:172] (0x4000b881e0) (1) Data frame sent\nI0814 15:51:26.441744  
  3043 log.go:172] (0x400003a420) (0x4000b881e0) Stream removed, broadcasting: 1\nI0814 15:51:26.444221    3043 log.go:172] (0x400003a420) Go away received\nI0814 15:51:26.446430    3043 log.go:172] (0x400003a420) (0x4000b881e0) Stream removed, broadcasting: 1\nI0814 15:51:26.446831    3043 log.go:172] (0x400003a420) (0x40007f3220) Stream removed, broadcasting: 3\nI0814 15:51:26.447008    3043 log.go:172] (0x400003a420) (0x40009a8000) Stream removed, broadcasting: 5\n"
Aug 14 15:51:26.457: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 14 15:51:26.457: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 14 15:51:26.458: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 15:51:26.463: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 14 15:51:36.811: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 15:51:36.811: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 15:51:36.811: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 15:51:37.060: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999992847s
Aug 14 15:51:38.067: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.928618063s
Aug 14 15:51:39.075: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.921993336s
Aug 14 15:51:40.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.914044403s
Aug 14 15:51:41.260: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.739566363s
Aug 14 15:51:42.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.729167438s
Aug 14 15:51:43.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.720820814s
Aug 14 15:51:44.286: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.712541934s
Aug 14 15:51:45.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.702997557s
Aug 14 15:51:46.303: INFO: Verifying statefulset ss doesn't scale past 3 for another 694.687852ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3844
Aug 14 15:51:47.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:51:48.766: INFO: stderr: "I0814 15:51:48.663391    3066 log.go:172] (0x4000a9e000) (0x40009dc000) Create stream\nI0814 15:51:48.668652    3066 log.go:172] (0x4000a9e000) (0x40009dc000) Stream added, broadcasting: 1\nI0814 15:51:48.684064    3066 log.go:172] (0x4000a9e000) Reply frame received for 1\nI0814 15:51:48.685659    3066 log.go:172] (0x4000a9e000) (0x400080b360) Create stream\nI0814 15:51:48.685803    3066 log.go:172] (0x4000a9e000) (0x400080b360) Stream added, broadcasting: 3\nI0814 15:51:48.688007    3066 log.go:172] (0x4000a9e000) Reply frame received for 3\nI0814 15:51:48.688482    3066 log.go:172] (0x4000a9e000) (0x40006fe000) Create stream\nI0814 15:51:48.688577    3066 log.go:172] (0x4000a9e000) (0x40006fe000) Stream added, broadcasting: 5\nI0814 15:51:48.690498    3066 log.go:172] (0x4000a9e000) Reply frame received for 5\nI0814 15:51:48.743718    3066 log.go:172] (0x4000a9e000) Data frame received for 5\nI0814 15:51:48.744082    3066 log.go:172] (0x4000a9e000) Data frame received for 3\nI0814 15:51:48.744321    3066 log.go:172] (0x400080b360) (3) Data frame handling\nI0814 15:51:48.744534    3066 log.go:172] (0x4000a9e000) Data frame received for 1\nI0814 15:51:48.744694    3066 log.go:172] (0x40009dc000) (1) Data frame handling\nI0814 15:51:48.745190    3066 log.go:172] (0x40006fe000) (5) Data frame handling\nI0814 15:51:48.746335    3066 log.go:172] (0x40009dc000) (1) Data frame sent\nI0814 15:51:48.746601    3066 log.go:172] (0x400080b360) (3) Data frame sent\nI0814 15:51:48.746696    3066 log.go:172] (0x4000a9e000) Data frame received for 3\nI0814 15:51:48.746757    3066 log.go:172] (0x400080b360) (3) Data frame handling\nI0814 15:51:48.747133    3066 log.go:172] (0x40006fe000) (5) Data frame sent\nI0814 15:51:48.747207    3066 log.go:172] (0x4000a9e000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0814 15:51:48.749241    3066 log.go:172] (0x4000a9e000) (0x40009dc000) Stream removed, broadcasting: 
1\nI0814 15:51:48.752290    3066 log.go:172] (0x40006fe000) (5) Data frame handling\nI0814 15:51:48.752612    3066 log.go:172] (0x4000a9e000) Go away received\nI0814 15:51:48.756130    3066 log.go:172] (0x4000a9e000) (0x40009dc000) Stream removed, broadcasting: 1\nI0814 15:51:48.756546    3066 log.go:172] (0x4000a9e000) (0x400080b360) Stream removed, broadcasting: 3\nI0814 15:51:48.756923    3066 log.go:172] (0x4000a9e000) (0x40006fe000) Stream removed, broadcasting: 5\n"
Aug 14 15:51:48.767: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 14 15:51:48.767: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 14 15:51:48.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:51:50.209: INFO: stderr: "I0814 15:51:50.094474    3090 log.go:172] (0x4000ae4000) (0x40006e2000) Create stream\nI0814 15:51:50.097726    3090 log.go:172] (0x4000ae4000) (0x40006e2000) Stream added, broadcasting: 1\nI0814 15:51:50.109008    3090 log.go:172] (0x4000ae4000) Reply frame received for 1\nI0814 15:51:50.109618    3090 log.go:172] (0x4000ae4000) (0x40006e20a0) Create stream\nI0814 15:51:50.109684    3090 log.go:172] (0x4000ae4000) (0x40006e20a0) Stream added, broadcasting: 3\nI0814 15:51:50.110991    3090 log.go:172] (0x4000ae4000) Reply frame received for 3\nI0814 15:51:50.111277    3090 log.go:172] (0x4000ae4000) (0x4000734000) Create stream\nI0814 15:51:50.111341    3090 log.go:172] (0x4000ae4000) (0x4000734000) Stream added, broadcasting: 5\nI0814 15:51:50.112301    3090 log.go:172] (0x4000ae4000) Reply frame received for 5\nI0814 15:51:50.188676    3090 log.go:172] (0x4000ae4000) Data frame received for 3\nI0814 15:51:50.189264    3090 log.go:172] (0x4000ae4000) Data frame received for 5\nI0814 15:51:50.189440    3090 log.go:172] (0x4000734000) (5) Data frame handling\nI0814 15:51:50.189891    3090 log.go:172] (0x4000ae4000) Data frame received for 1\nI0814 15:51:50.190003    3090 log.go:172] (0x40006e2000) (1) Data frame handling\nI0814 15:51:50.190166    3090 log.go:172] (0x40006e20a0) (3) Data frame handling\nI0814 15:51:50.190848    3090 log.go:172] (0x40006e20a0) (3) Data frame sent\nI0814 15:51:50.191241    3090 log.go:172] (0x4000ae4000) Data frame received for 3\nI0814 15:51:50.191305    3090 log.go:172] (0x40006e20a0) (3) Data frame handling\nI0814 15:51:50.191358    3090 log.go:172] (0x40006e2000) (1) Data frame sent\nI0814 15:51:50.191518    3090 log.go:172] (0x4000734000) (5) Data frame sent\nI0814 15:51:50.191636    3090 log.go:172] (0x4000ae4000) Data frame received for 5\nI0814 15:51:50.191718    3090 log.go:172] (0x4000734000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0814 15:51:50.193401  
  3090 log.go:172] (0x4000ae4000) (0x40006e2000) Stream removed, broadcasting: 1\nI0814 15:51:50.195532    3090 log.go:172] (0x4000ae4000) Go away received\nI0814 15:51:50.198263    3090 log.go:172] (0x4000ae4000) (0x40006e2000) Stream removed, broadcasting: 1\nI0814 15:51:50.198561    3090 log.go:172] (0x4000ae4000) (0x40006e20a0) Stream removed, broadcasting: 3\nI0814 15:51:50.198761    3090 log.go:172] (0x4000ae4000) (0x4000734000) Stream removed, broadcasting: 5\n"
Aug 14 15:51:50.210: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 14 15:51:50.210: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 14 15:51:50.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:51:51.614: INFO: rc: 1
Aug 14 15:51:51.615: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Aug 14 15:52:01.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:52:03.232: INFO: rc: 1
Aug 14 15:52:03.233: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Aug 14 15:52:13.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:52:14.706: INFO: rc: 1
Aug 14 15:52:14.706: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 15:52:24.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:52:25.939: INFO: rc: 1
Aug 14 15:52:25.939: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 15:52:35.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:52:37.171: INFO: rc: 1
Aug 14 15:52:37.172: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 15:52:47.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:52:48.434: INFO: rc: 1
Aug 14 15:52:48.435: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 15:52:58.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:52:59.660: INFO: rc: 1
Aug 14 15:52:59.661: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 15:53:09.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:53:10.889: INFO: rc: 1
Aug 14 15:53:10.889: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 15:53:20.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:53:22.138: INFO: rc: 1
Aug 14 15:53:22.138: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 15:53:32.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:53:33.378: INFO: rc: 1
Aug 14 15:53:33.378: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 15:53:43.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:53:44.599: INFO: rc: 1
Aug 14 15:53:44.599: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 14 15:53:54.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:53:55.789: INFO: rc: 1
Aug 14 15:53:55.789: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
[... 15 identical retry blocks elided: the same RunHostCmd was re-run every 10s from 15:54:05 through 15:56:47, each attempt exiting 1 with `Error from server (NotFound): pods "ss-2" not found` ...]
Aug 14 15:56:58.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 15:57:00.214: INFO: rc: 1
Aug 14 15:57:00.214: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Aug 14 15:57:00.214: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 14 15:57:00.240: INFO: Deleting all statefulset in ns statefulset-3844
Aug 14 15:57:00.243: INFO: Scaling statefulset ss to 0
Aug 14 15:57:00.258: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 15:57:00.262: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:57:00.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3844" for this suite.

• [SLOW TEST:393.274 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
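The RunHostCmd retry pattern visible in the statefulset spec above (re-run a command, wait a fixed interval on failure, stop after a bounded number of attempts) can be sketched as a small POSIX shell helper. This is an illustrative sketch, not the e2e framework's actual implementation; the function names and the flaky demo command are assumptions, and the interval is shortened from the framework's 10s so the demo finishes quickly:

```shell
#!/bin/sh
# retry_cmd MAX INTERVAL CMD...: re-run CMD until it succeeds or MAX
# attempts are used, sleeping INTERVAL seconds between failures.
# (Illustrative helper; the e2e framework's RunHostCmd retries every 10s.)
retry_cmd() {
  max=$1; interval=$2; shift 2
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed; retrying in ${interval}s" >&2
    attempt=$((attempt + 1))
    sleep "$interval"
  done
  echo "succeeded on attempt $attempt"
}

# Demo: a command that fails twice, then succeeds (stands in for the
# kubectl exec call in the log; counter file tracks invocations).
COUNTER_FILE=$(mktemp)
echo 0 > "$COUNTER_FILE"
flaky() {
  n=$(cat "$COUNTER_FILE")
  echo $((n + 1)) > "$COUNTER_FILE"
  [ "$n" -ge 2 ]
}
retry_cmd 5 0 flaky   # prints "succeeded on attempt 3"
```

In the log, the target pod `ss-2` had already been deleted by the scale-down, so every attempt failed until the surrounding test logic moved on at 15:57:00.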
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":241,"skipped":4100,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:57:00.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 14 15:57:00.415: INFO: Waiting up to 5m0s for pod "downward-api-04a2f6fb-37c5-436d-b2b5-21cbc3b8a53f" in namespace "downward-api-3904" to be "Succeeded or Failed"
Aug 14 15:57:00.431: INFO: Pod "downward-api-04a2f6fb-37c5-436d-b2b5-21cbc3b8a53f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.774801ms
Aug 14 15:57:02.843: INFO: Pod "downward-api-04a2f6fb-37c5-436d-b2b5-21cbc3b8a53f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428376796s
Aug 14 15:57:04.852: INFO: Pod "downward-api-04a2f6fb-37c5-436d-b2b5-21cbc3b8a53f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437100161s
STEP: Saw pod success
Aug 14 15:57:04.852: INFO: Pod "downward-api-04a2f6fb-37c5-436d-b2b5-21cbc3b8a53f" satisfied condition "Succeeded or Failed"
Aug 14 15:57:04.890: INFO: Trying to get logs from node kali-worker pod downward-api-04a2f6fb-37c5-436d-b2b5-21cbc3b8a53f container dapi-container: 
STEP: delete the pod
Aug 14 15:57:04.965: INFO: Waiting for pod downward-api-04a2f6fb-37c5-436d-b2b5-21cbc3b8a53f to disappear
Aug 14 15:57:04.994: INFO: Pod downward-api-04a2f6fb-37c5-436d-b2b5-21cbc3b8a53f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:57:04.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3904" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4124,"failed":0}
SSSSSSSSSS
------------------------------
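The Downward API spec above injects the pod's own UID into a container env var. A minimal manifest sketch follows; the pod name, image, and env var name are illustrative assumptions, while `fieldRef` with `fieldPath: metadata.uid` is the documented Downward API mechanism. The sanity check only inspects the generated file, so no cluster is needed:

```shell
#!/bin/sh
# Sketch of a Downward API pod manifest similar to what the test creates.
# Names and image are illustrative; metadata.uid is a supported fieldRef path.
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
# Verify the manifest wires the UID field into the env var.
grep -q 'fieldPath: metadata.uid' "$MANIFEST" && echo "manifest ok"
```

A pod like this would be submitted with `kubectl create -f "$MANIFEST"`, after which the container's log would show the UID, matching the "Succeeded or Failed" check in the log.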
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:57:05.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 14 15:57:07.509: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 14 15:57:09.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733017427, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733017427, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733017427, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733017427, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 14 15:57:11.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733017427, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733017427, loc:(*time.Location)(0x747e900)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733017427, loc:(*time.Location)(0x747e900)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733017427, loc:(*time.Location)(0x747e900)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 14 15:57:15.112: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:57:15.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-929" for this suite.
STEP: Destroying namespace "webhook-929-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.076 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":243,"skipped":4134,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:57:16.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-5b661856-c29c-496e-9c27-80afa05fe43a
STEP: Creating a pod to test consume secrets
Aug 14 15:57:16.960: INFO: Waiting up to 5m0s for pod "pod-secrets-a836ffa1-b87a-4917-9522-a6e4756f1689" in namespace "secrets-5766" to be "Succeeded or Failed"
Aug 14 15:57:16.984: INFO: Pod "pod-secrets-a836ffa1-b87a-4917-9522-a6e4756f1689": Phase="Pending", Reason="", readiness=false. Elapsed: 23.900207ms
Aug 14 15:57:19.266: INFO: Pod "pod-secrets-a836ffa1-b87a-4917-9522-a6e4756f1689": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30621366s
Aug 14 15:57:21.381: INFO: Pod "pod-secrets-a836ffa1-b87a-4917-9522-a6e4756f1689": Phase="Running", Reason="", readiness=true. Elapsed: 4.421764547s
Aug 14 15:57:23.388: INFO: Pod "pod-secrets-a836ffa1-b87a-4917-9522-a6e4756f1689": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.428461566s
STEP: Saw pod success
Aug 14 15:57:23.389: INFO: Pod "pod-secrets-a836ffa1-b87a-4917-9522-a6e4756f1689" satisfied condition "Succeeded or Failed"
Aug 14 15:57:23.394: INFO: Trying to get logs from node kali-worker pod pod-secrets-a836ffa1-b87a-4917-9522-a6e4756f1689 container secret-volume-test: 
STEP: delete the pod
Aug 14 15:57:23.421: INFO: Waiting for pod pod-secrets-a836ffa1-b87a-4917-9522-a6e4756f1689 to disappear
Aug 14 15:57:23.482: INFO: Pod pod-secrets-a836ffa1-b87a-4917-9522-a6e4756f1689 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:57:23.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5766" for this suite.

• [SLOW TEST:7.642 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4159,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:57:23.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:57:24.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2233" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":245,"skipped":4167,"failed":0}

------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:57:24.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 14 15:57:24.405: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:57:43.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6326" for this suite.

• [SLOW TEST:19.767 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4167,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:57:44.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Aug 14 15:57:44.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9832'
Aug 14 15:57:46.348: INFO: stderr: ""
Aug 14 15:57:46.349: INFO: stdout: "pod/pause created\n"
Aug 14 15:57:46.349: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 14 15:57:46.349: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9832" to be "running and ready"
Aug 14 15:57:46.424: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 74.659587ms
Aug 14 15:57:48.432: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082593905s
Aug 14 15:57:50.447: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.098335344s
Aug 14 15:57:50.448: INFO: Pod "pause" satisfied condition "running and ready"
Aug 14 15:57:50.448: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 14 15:57:50.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9832'
Aug 14 15:57:51.692: INFO: stderr: ""
Aug 14 15:57:51.692: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 14 15:57:51.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9832'
Aug 14 15:57:52.910: INFO: stderr: ""
Aug 14 15:57:52.910: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          6s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 14 15:57:52.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9832'
Aug 14 15:57:54.190: INFO: stderr: ""
Aug 14 15:57:54.190: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 14 15:57:54.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9832'
Aug 14 15:57:55.413: INFO: stderr: ""
Aug 14 15:57:55.413: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Aug 14 15:57:55.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9832'
Aug 14 15:57:57.145: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 14 15:57:57.145: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 14 15:57:57.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9832'
Aug 14 15:57:58.441: INFO: stderr: "No resources found in kubectl-9832 namespace.\n"
Aug 14 15:57:58.441: INFO: stdout: ""
Aug 14 15:57:58.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9832 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 14 15:57:59.708: INFO: stderr: ""
Aug 14 15:57:59.709: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:57:59.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9832" for this suite.

• [SLOW TEST:15.694 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":247,"skipped":4208,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:57:59.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 14 15:58:08.011: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 15:58:08.057: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 15:58:10.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 15:58:10.063: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 15:58:12.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 15:58:12.062: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 15:58:14.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 15:58:14.064: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 15:58:16.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 15:58:16.062: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 15:58:18.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 15:58:18.065: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 15:58:20.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 15:58:20.064: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 15:58:22.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 15:58:22.063: INFO: Pod pod-with-poststart-http-hook still exists
Aug 14 15:58:24.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 14 15:58:24.063: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:58:24.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7838" for this suite.

• [SLOW TEST:24.353 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4250,"failed":0}
SSSSSSSSSSSSSSS
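The pod created by the poststart-hook test above is shaped roughly like the following manifest. This is an illustrative sketch, not the suite's exact fixture: the image, port, and host IP are hypothetical (in the real test the `host` points at the separately created hook-handler pod).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2        # hypothetical; any long-running image works
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart     # hypothetical path
          port: 8080
          host: 10.244.0.10             # hypothetical; the hook-handler pod's IP
```

The kubelet issues the `httpGet` immediately after the container starts; the test then verifies the handler pod received the request before deleting the pod, which is the wait-for-disappearance loop in the log above.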
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:58:24.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 15:58:24.263: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9817c331-194e-4f74-a515-588f15aa30b8", Controller:(*bool)(0x400384b422), BlockOwnerDeletion:(*bool)(0x400384b423)}}
Aug 14 15:58:24.318: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"90eb3341-8cb2-4209-a6b3-db9f10a4958e", Controller:(*bool)(0x400384b61a), BlockOwnerDeletion:(*bool)(0x400384b61b)}}
Aug 14 15:58:24.329: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d0ec45d8-ea68-4f37-a5a2-5c5811449307", Controller:(*bool)(0x400384b80a), BlockOwnerDeletion:(*bool)(0x400384b80b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:58:29.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4269" for this suite.

• [SLOW TEST:5.312 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":249,"skipped":4265,"failed":0}
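The ownership circle logged above (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) can be sketched as metadata on one of the pods. The UID below is taken from the log; the rest of the pod spec is omitted for brevity.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 9817c331-194e-4f74-a515-588f15aa30b8
    controller: true
    blockOwnerDeletion: true
# pod2 carries an analogous ownerReference to pod1, and pod3 to pod2,
# closing the cycle: pod1 <- pod2 <- pod3 <- pod1.
```

The test asserts that the garbage collector neither deletes these pods nor deadlocks when it encounters the cycle in its dependency graph.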
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:58:29.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 14 15:58:30.002: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 14 15:58:30.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99bffb14-0d70-41fe-b8b9-38d953e6d9dc" in namespace "downward-api-7444" to be "Succeeded or Failed"
Aug 14 15:58:30.499: INFO: Pod "downwardapi-volume-99bffb14-0d70-41fe-b8b9-38d953e6d9dc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.730419ms
Aug 14 15:58:32.574: INFO: Pod "downwardapi-volume-99bffb14-0d70-41fe-b8b9-38d953e6d9dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085711026s
Aug 14 15:58:34.801: INFO: Pod "downwardapi-volume-99bffb14-0d70-41fe-b8b9-38d953e6d9dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313425768s
Aug 14 15:58:36.813: INFO: Pod "downwardapi-volume-99bffb14-0d70-41fe-b8b9-38d953e6d9dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.325234363s
STEP: Saw pod success
Aug 14 15:58:36.813: INFO: Pod "downwardapi-volume-99bffb14-0d70-41fe-b8b9-38d953e6d9dc" satisfied condition "Succeeded or Failed"
Aug 14 15:58:36.921: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-99bffb14-0d70-41fe-b8b9-38d953e6d9dc container client-container: 
STEP: delete the pod
Aug 14 15:58:37.101: INFO: Waiting for pod downwardapi-volume-99bffb14-0d70-41fe-b8b9-38d953e6d9dc to disappear
Aug 14 15:58:37.158: INFO: Pod downwardapi-volume-99bffb14-0d70-41fe-b8b9-38d953e6d9dc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:58:37.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7444" for this suite.

• [SLOW TEST:7.030 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4267,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
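The downward API volume exercised above projects the pod's own name into a file via `fieldRef`. A minimal sketch of the shape (the image and command are hypothetical stand-ins for the suite's mounttest container):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                    # hypothetical; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The test passes when the container's log output equals the pod's `metadata.name`, which is why the log shows it fetching logs from the `client-container` after the pod succeeds.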
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:58:37.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-8b7340ac-b86e-4e2b-b858-a5db8e14c371
STEP: Creating a pod to test consume secrets
Aug 14 15:58:37.552: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eec26a48-2024-4b5b-8b64-391e9a19265f" in namespace "projected-9021" to be "Succeeded or Failed"
Aug 14 15:58:37.563: INFO: Pod "pod-projected-secrets-eec26a48-2024-4b5b-8b64-391e9a19265f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.263027ms
Aug 14 15:58:39.723: INFO: Pod "pod-projected-secrets-eec26a48-2024-4b5b-8b64-391e9a19265f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170884192s
Aug 14 15:58:41.806: INFO: Pod "pod-projected-secrets-eec26a48-2024-4b5b-8b64-391e9a19265f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253333711s
Aug 14 15:58:43.813: INFO: Pod "pod-projected-secrets-eec26a48-2024-4b5b-8b64-391e9a19265f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.26063593s
STEP: Saw pod success
Aug 14 15:58:43.813: INFO: Pod "pod-projected-secrets-eec26a48-2024-4b5b-8b64-391e9a19265f" satisfied condition "Succeeded or Failed"
Aug 14 15:58:43.827: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-eec26a48-2024-4b5b-8b64-391e9a19265f container projected-secret-volume-test: 
STEP: delete the pod
Aug 14 15:58:43.861: INFO: Waiting for pod pod-projected-secrets-eec26a48-2024-4b5b-8b64-391e9a19265f to disappear
Aug 14 15:58:44.129: INFO: Pod pod-projected-secrets-eec26a48-2024-4b5b-8b64-391e9a19265f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:58:44.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9021" for this suite.

• [SLOW TEST:6.967 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4309,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
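The projected-secret test above combines three settings: a non-root `runAsUser`, an `fsGroup`, and a `defaultMode` on the projected volume. A sketch of that combination (secret name, UIDs, and mode are illustrative, not the exact fixture values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                      # non-root
    fsGroup: 1001
  containers:
  - name: projected-secret-volume-test
    image: busybox                       # hypothetical image
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440                  # illustrative mode
      sources:
      - secret:
          name: projected-secret-test    # hypothetical secret name
```

The assertion is that the mounted file carries the requested mode and the `fsGroup` ownership, readable by the non-root user.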
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:58:44.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0814 15:58:44.971523      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 14 15:58:44.971: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:58:44.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-776" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":253,"skipped":4361,"failed":0}
SSSSSSSSSSSS
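The orphan-propagation test above deletes the Deployment with an explicit propagation policy so the garbage collector must leave the ReplicaSet behind. The DELETE request body is a `DeleteOptions` object, roughly:

```yaml
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With this policy the ReplicaSet's `ownerReferences` entry for the Deployment is removed instead of the ReplicaSet being cascaded away; the "wait for deployment deletion to see if the garbage collector mistakenly deletes the rs" step in the log verifies exactly that.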
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:58:44.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-dd5e0543-8b4c-4465-9c80-5607faa911a5
STEP: Creating a pod to test consume secrets
Aug 14 15:58:45.101: INFO: Waiting up to 5m0s for pod "pod-secrets-770a5eff-c96a-43e0-84c5-85675e6ef9c6" in namespace "secrets-2109" to be "Succeeded or Failed"
Aug 14 15:58:45.157: INFO: Pod "pod-secrets-770a5eff-c96a-43e0-84c5-85675e6ef9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 56.441634ms
Aug 14 15:58:47.169: INFO: Pod "pod-secrets-770a5eff-c96a-43e0-84c5-85675e6ef9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067675006s
Aug 14 15:58:49.310: INFO: Pod "pod-secrets-770a5eff-c96a-43e0-84c5-85675e6ef9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209189862s
Aug 14 15:58:51.531: INFO: Pod "pod-secrets-770a5eff-c96a-43e0-84c5-85675e6ef9c6": Phase="Running", Reason="", readiness=true. Elapsed: 6.429695789s
Aug 14 15:58:53.535: INFO: Pod "pod-secrets-770a5eff-c96a-43e0-84c5-85675e6ef9c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.433938173s
STEP: Saw pod success
Aug 14 15:58:53.535: INFO: Pod "pod-secrets-770a5eff-c96a-43e0-84c5-85675e6ef9c6" satisfied condition "Succeeded or Failed"
Aug 14 15:58:53.538: INFO: Trying to get logs from node kali-worker pod pod-secrets-770a5eff-c96a-43e0-84c5-85675e6ef9c6 container secret-volume-test: 
STEP: delete the pod
Aug 14 15:58:53.633: INFO: Waiting for pod pod-secrets-770a5eff-c96a-43e0-84c5-85675e6ef9c6 to disappear
Aug 14 15:58:53.641: INFO: Pod pod-secrets-770a5eff-c96a-43e0-84c5-85675e6ef9c6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:58:53.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2109" for this suite.

• [SLOW TEST:8.671 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4373,"failed":0}
SSSSSSSSSS
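The multi-volume secret test above mounts the same Secret at two paths in one pod. A minimal sketch (names and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox                 # hypothetical image
    command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-1
    - name: secret-volume-2
      mountPath: /etc/secret-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test      # hypothetical; same Secret backs both volumes
  - name: secret-volume-2
    secret:
      secretName: secret-test
```

Both mounts must expose identical, readable contents, which the container's log output confirms.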
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:58:53.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Aug 14 15:58:53.722: INFO: Waiting up to 5m0s for pod "client-containers-31a874cf-d8a1-44d7-91ad-be37eb5322d8" in namespace "containers-6324" to be "Succeeded or Failed"
Aug 14 15:58:53.751: INFO: Pod "client-containers-31a874cf-d8a1-44d7-91ad-be37eb5322d8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.623936ms
Aug 14 15:58:55.758: INFO: Pod "client-containers-31a874cf-d8a1-44d7-91ad-be37eb5322d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036036466s
Aug 14 15:58:57.762: INFO: Pod "client-containers-31a874cf-d8a1-44d7-91ad-be37eb5322d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040736288s
Aug 14 15:58:59.768: INFO: Pod "client-containers-31a874cf-d8a1-44d7-91ad-be37eb5322d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046421873s
STEP: Saw pod success
Aug 14 15:58:59.768: INFO: Pod "client-containers-31a874cf-d8a1-44d7-91ad-be37eb5322d8" satisfied condition "Succeeded or Failed"
Aug 14 15:58:59.772: INFO: Trying to get logs from node kali-worker pod client-containers-31a874cf-d8a1-44d7-91ad-be37eb5322d8 container test-container: 
STEP: delete the pod
Aug 14 15:58:59.836: INFO: Waiting for pod client-containers-31a874cf-d8a1-44d7-91ad-be37eb5322d8 to disappear
Aug 14 15:58:59.883: INFO: Pod client-containers-31a874cf-d8a1-44d7-91ad-be37eb5322d8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:58:59.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6324" for this suite.

• [SLOW TEST:6.241 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4383,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
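The "override the image's default arguments" test above relies on the mapping between pod fields and Docker directives: `args` replaces the image's `CMD` while leaving its `ENTRYPOINT` intact. A sketch (image and arguments are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # hypothetical image
    # args overrides the image's default CMD; the image's ENTRYPOINT still runs.
    args: ["echo", "override", "arguments"]
```

The test then reads the container's log and checks it printed the overridden arguments rather than whatever the image's default `CMD` would have produced.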
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:58:59.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 14 15:58:59.968: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 14 15:59:04.975: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:59:05.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9531" for this suite.

• [SLOW TEST:5.258 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":256,"skipped":4406,"failed":0}
SSSSSSSSSSS
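The "release no longer matching pods" test above hinges on the ReplicationController's label selector. A sketch of the controller it creates (image is a hypothetical stand-in):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: k8s.gcr.io/pause:3.2   # hypothetical image
```

When the test patches the running pod's `name` label to a value outside the selector, the controller stops counting that pod toward `replicas` (releasing it, including clearing its controller ownerReference) and creates a replacement that matches again.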
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:59:05.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 14 15:59:16.770: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 14 15:59:17.245: INFO: Pod pod-with-prestop-http-hook still exists
Aug 14 15:59:19.246: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 14 15:59:19.254: INFO: Pod pod-with-prestop-http-hook still exists
Aug 14 15:59:21.246: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 14 15:59:21.253: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:59:21.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2409" for this suite.

• [SLOW TEST:16.193 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4417,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:59:21.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-1252/configmap-test-33e397e0-e21d-4841-8a34-ed6313b98c4e
STEP: Creating a pod to test consume configMaps
Aug 14 15:59:21.486: INFO: Waiting up to 5m0s for pod "pod-configmaps-30df9ba9-cac9-4122-906b-a41ca91ceaf7" in namespace "configmap-1252" to be "Succeeded or Failed"
Aug 14 15:59:21.508: INFO: Pod "pod-configmaps-30df9ba9-cac9-4122-906b-a41ca91ceaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.916933ms
Aug 14 15:59:23.630: INFO: Pod "pod-configmaps-30df9ba9-cac9-4122-906b-a41ca91ceaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143924796s
Aug 14 15:59:25.638: INFO: Pod "pod-configmaps-30df9ba9-cac9-4122-906b-a41ca91ceaf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151925079s
STEP: Saw pod success
Aug 14 15:59:25.638: INFO: Pod "pod-configmaps-30df9ba9-cac9-4122-906b-a41ca91ceaf7" satisfied condition "Succeeded or Failed"
Aug 14 15:59:25.643: INFO: Trying to get logs from node kali-worker pod pod-configmaps-30df9ba9-cac9-4122-906b-a41ca91ceaf7 container env-test: 
STEP: delete the pod
Aug 14 15:59:25.828: INFO: Waiting for pod pod-configmaps-30df9ba9-cac9-4122-906b-a41ca91ceaf7 to disappear
Aug 14 15:59:25.846: INFO: Pod pod-configmaps-30df9ba9-cac9-4122-906b-a41ca91ceaf7 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:59:25.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1252" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4437,"failed":0}
SSSSSSSSSSSSSSSSS
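The ConfigMap environment-variable test above wires a ConfigMap key into a container env var with `configMapKeyRef`. A sketch using the ConfigMap name from the log (the key, env var name, and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                 # hypothetical image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1          # hypothetical env var name
      valueFrom:
        configMapKeyRef:
          name: configmap-test-33e397e0-e21d-4841-8a34-ed6313b98c4e
          key: data-1              # hypothetical key
```

The test passes when the container's `env` output contains the variable with the ConfigMap's value, which is why the log fetches logs from the `env-test` container after the pod succeeds.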
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:59:25.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 14 15:59:26.384: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6080 /api/v1/namespaces/watch-6080/configmaps/e2e-watch-test-resource-version d46d4a17-8a08-4d45-af7e-18dd829e0c9c 9565929 0 2020-08-14 15:59:26 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-08-14 15:59:26 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 14 15:59:26.386: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6080 /api/v1/namespaces/watch-6080/configmaps/e2e-watch-test-resource-version d46d4a17-8a08-4d45-af7e-18dd829e0c9c 9565930 0 2020-08-14 15:59:26 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-08-14 15:59:26 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:59:26.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6080" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":259,"skipped":4454,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:59:26.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 14 15:59:26.572: INFO: Waiting up to 5m0s for pod "pod-660f39ee-44a3-4955-9824-f79ec8dc820c" in namespace "emptydir-5021" to be "Succeeded or Failed"
Aug 14 15:59:26.643: INFO: Pod "pod-660f39ee-44a3-4955-9824-f79ec8dc820c": Phase="Pending", Reason="", readiness=false. Elapsed: 71.018114ms
Aug 14 15:59:28.693: INFO: Pod "pod-660f39ee-44a3-4955-9824-f79ec8dc820c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12020929s
Aug 14 15:59:30.700: INFO: Pod "pod-660f39ee-44a3-4955-9824-f79ec8dc820c": Phase="Running", Reason="", readiness=true. Elapsed: 4.127539529s
Aug 14 15:59:32.707: INFO: Pod "pod-660f39ee-44a3-4955-9824-f79ec8dc820c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134943029s
STEP: Saw pod success
Aug 14 15:59:32.708: INFO: Pod "pod-660f39ee-44a3-4955-9824-f79ec8dc820c" satisfied condition "Succeeded or Failed"
Aug 14 15:59:32.712: INFO: Trying to get logs from node kali-worker pod pod-660f39ee-44a3-4955-9824-f79ec8dc820c container test-container: 
STEP: delete the pod
Aug 14 15:59:32.779: INFO: Waiting for pod pod-660f39ee-44a3-4955-9824-f79ec8dc820c to disappear
Aug 14 15:59:32.788: INFO: Pod pod-660f39ee-44a3-4955-9824-f79ec8dc820c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:59:32.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5021" for this suite.

• [SLOW TEST:6.365 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4506,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:59:32.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 15:59:46.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5346" for this suite.

• [SLOW TEST:13.285 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":261,"skipped":4507,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 15:59:46.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:00:46.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6012" for this suite.

• [SLOW TEST:60.135 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4509,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:00:46.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:00:46.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2869" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":263,"skipped":4521,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:00:46.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 14 16:00:46.673: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3561 /api/v1/namespaces/watch-3561/configmaps/e2e-watch-test-label-changed 34c97e87-f43f-47b3-9212-818b03cc1d30 9566268 0 2020-08-14 16:00:46 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-14 16:00:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 14 16:00:46.674: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3561 /api/v1/namespaces/watch-3561/configmaps/e2e-watch-test-label-changed 34c97e87-f43f-47b3-9212-818b03cc1d30 9566269 0 2020-08-14 16:00:46 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-14 16:00:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 14 16:00:46.676: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3561 /api/v1/namespaces/watch-3561/configmaps/e2e-watch-test-label-changed 34c97e87-f43f-47b3-9212-818b03cc1d30 9566270 0 2020-08-14 16:00:46 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-14 16:00:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 14 16:00:56.751: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3561 /api/v1/namespaces/watch-3561/configmaps/e2e-watch-test-label-changed 34c97e87-f43f-47b3-9212-818b03cc1d30 9566316 0 2020-08-14 16:00:46 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-14 16:00:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 14 16:00:56.752: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3561 /api/v1/namespaces/watch-3561/configmaps/e2e-watch-test-label-changed 34c97e87-f43f-47b3-9212-818b03cc1d30 9566317 0 2020-08-14 16:00:46 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-14 16:00:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 14 16:00:56.753: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3561 /api/v1/namespaces/watch-3561/configmaps/e2e-watch-test-label-changed 34c97e87-f43f-47b3-9212-818b03cc1d30 9566318 0 2020-08-14 16:00:46 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-14 16:00:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:00:56.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3561" for this suite.

• [SLOW TEST:10.315 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":264,"skipped":4526,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:00:56.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6724 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6724;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6724 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6724;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6724.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6724.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6724.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6724.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6724.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6724.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6724.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6724.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6724.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6724.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6724.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 136.209.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.209.136_udp@PTR;check="$$(dig +tcp +noall +answer +search 136.209.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.209.136_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6724 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6724;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6724 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6724;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6724.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6724.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6724.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6724.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6724.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6724.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6724.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6724.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6724.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6724.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6724.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6724.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 136.209.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.209.136_udp@PTR;check="$$(dig +tcp +noall +answer +search 136.209.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.209.136_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 14 16:01:12.167: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.171: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.174: INFO: Unable to read wheezy_udp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.176: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.179: INFO: Unable to read wheezy_udp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.183: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.186: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.353: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.473: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.477: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.480: INFO: Unable to read jessie_udp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.483: INFO: Unable to read jessie_tcp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.486: INFO: Unable to read jessie_udp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.488: INFO: Unable to read jessie_tcp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.491: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.494: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:12.517: INFO: Lookups using dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6724 wheezy_tcp@dns-test-service.dns-6724 wheezy_udp@dns-test-service.dns-6724.svc wheezy_tcp@dns-test-service.dns-6724.svc wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6724 jessie_tcp@dns-test-service.dns-6724 jessie_udp@dns-test-service.dns-6724.svc jessie_tcp@dns-test-service.dns-6724.svc jessie_udp@_http._tcp.dns-test-service.dns-6724.svc jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc]

Aug 14 16:01:17.525: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.531: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.537: INFO: Unable to read wheezy_udp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.541: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.544: INFO: Unable to read wheezy_udp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.547: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.552: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.557: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.578: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.582: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.585: INFO: Unable to read jessie_udp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.589: INFO: Unable to read jessie_tcp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.593: INFO: Unable to read jessie_udp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.597: INFO: Unable to read jessie_tcp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.602: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:17.633: INFO: Lookups using dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6724 wheezy_tcp@dns-test-service.dns-6724 wheezy_udp@dns-test-service.dns-6724.svc wheezy_tcp@dns-test-service.dns-6724.svc wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6724 jessie_tcp@dns-test-service.dns-6724 jessie_udp@dns-test-service.dns-6724.svc jessie_tcp@dns-test-service.dns-6724.svc jessie_udp@_http._tcp.dns-test-service.dns-6724.svc jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc]

Aug 14 16:01:22.525: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.530: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.535: INFO: Unable to read wheezy_udp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.541: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.546: INFO: Unable to read wheezy_udp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.551: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.555: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.559: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.589: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.594: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.598: INFO: Unable to read jessie_udp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.602: INFO: Unable to read jessie_tcp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.606: INFO: Unable to read jessie_udp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.610: INFO: Unable to read jessie_tcp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.615: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.619: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:22.643: INFO: Lookups using dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6724 wheezy_tcp@dns-test-service.dns-6724 wheezy_udp@dns-test-service.dns-6724.svc wheezy_tcp@dns-test-service.dns-6724.svc wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6724 jessie_tcp@dns-test-service.dns-6724 jessie_udp@dns-test-service.dns-6724.svc jessie_tcp@dns-test-service.dns-6724.svc jessie_udp@_http._tcp.dns-test-service.dns-6724.svc jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc]

Aug 14 16:01:27.525: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.531: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.535: INFO: Unable to read wheezy_udp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.539: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.543: INFO: Unable to read wheezy_udp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.548: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.554: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.558: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.581: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.585: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.588: INFO: Unable to read jessie_udp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.593: INFO: Unable to read jessie_tcp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.597: INFO: Unable to read jessie_udp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.601: INFO: Unable to read jessie_tcp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.605: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.609: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:27.634: INFO: Lookups using dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6724 wheezy_tcp@dns-test-service.dns-6724 wheezy_udp@dns-test-service.dns-6724.svc wheezy_tcp@dns-test-service.dns-6724.svc wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6724 jessie_tcp@dns-test-service.dns-6724 jessie_udp@dns-test-service.dns-6724.svc jessie_tcp@dns-test-service.dns-6724.svc jessie_udp@_http._tcp.dns-test-service.dns-6724.svc jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc]

Aug 14 16:01:32.541: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.546: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.551: INFO: Unable to read wheezy_udp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.556: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.561: INFO: Unable to read wheezy_udp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.566: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.570: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.575: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.625: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.630: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.636: INFO: Unable to read jessie_udp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.641: INFO: Unable to read jessie_tcp@dns-test-service.dns-6724 from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.645: INFO: Unable to read jessie_udp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.649: INFO: Unable to read jessie_tcp@dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.653: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.656: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc from pod dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504: the server could not find the requested resource (get pods dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504)
Aug 14 16:01:32.682: INFO: Lookups using dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6724 wheezy_tcp@dns-test-service.dns-6724 wheezy_udp@dns-test-service.dns-6724.svc wheezy_tcp@dns-test-service.dns-6724.svc wheezy_udp@_http._tcp.dns-test-service.dns-6724.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6724.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6724 jessie_tcp@dns-test-service.dns-6724 jessie_udp@dns-test-service.dns-6724.svc jessie_tcp@dns-test-service.dns-6724.svc jessie_udp@_http._tcp.dns-test-service.dns-6724.svc jessie_tcp@_http._tcp.dns-test-service.dns-6724.svc]

Aug 14 16:01:37.641: INFO: DNS probes using dns-6724/dns-test-1c10e15a-15ed-40e4-bb9d-e81857d3a504 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:01:38.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6724" for this suite.

• [SLOW TEST:41.936 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":265,"skipped":4533,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:01:38.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2623
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-2623
I0814 16:01:39.105663      10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2623, replica count: 2
I0814 16:01:42.157279      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0814 16:01:45.158185      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0814 16:01:48.158850      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 14 16:01:48.159: INFO: Creating new exec pod
Aug 14 16:01:53.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2623 execpodkblv5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 14 16:01:54.653: INFO: stderr: "I0814 16:01:54.548582    3945 log.go:172] (0x4000546000) (0x4000bf60a0) Create stream\nI0814 16:01:54.552348    3945 log.go:172] (0x4000546000) (0x4000bf60a0) Stream added, broadcasting: 1\nI0814 16:01:54.562867    3945 log.go:172] (0x4000546000) Reply frame received for 1\nI0814 16:01:54.564322    3945 log.go:172] (0x4000546000) (0x4000a0c000) Create stream\nI0814 16:01:54.564448    3945 log.go:172] (0x4000546000) (0x4000a0c000) Stream added, broadcasting: 3\nI0814 16:01:54.566683    3945 log.go:172] (0x4000546000) Reply frame received for 3\nI0814 16:01:54.566993    3945 log.go:172] (0x4000546000) (0x4000784000) Create stream\nI0814 16:01:54.567069    3945 log.go:172] (0x4000546000) (0x4000784000) Stream added, broadcasting: 5\nI0814 16:01:54.568657    3945 log.go:172] (0x4000546000) Reply frame received for 5\nI0814 16:01:54.628206    3945 log.go:172] (0x4000546000) Data frame received for 5\nI0814 16:01:54.628960    3945 log.go:172] (0x4000546000) Data frame received for 3\nI0814 16:01:54.629153    3945 log.go:172] (0x4000a0c000) (3) Data frame handling\nI0814 16:01:54.629240    3945 log.go:172] (0x4000784000) (5) Data frame handling\nI0814 16:01:54.629573    3945 log.go:172] (0x4000546000) Data frame received for 1\nI0814 16:01:54.629726    3945 log.go:172] (0x4000bf60a0) (1) Data frame handling\nI0814 16:01:54.632654    3945 log.go:172] (0x4000784000) (5) Data frame sent\nI0814 16:01:54.633047    3945 log.go:172] (0x4000bf60a0) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0814 16:01:54.633728    3945 log.go:172] (0x4000546000) Data frame received for 5\nI0814 16:01:54.633809    3945 log.go:172] (0x4000784000) (5) Data frame handling\nI0814 16:01:54.633895    3945 log.go:172] (0x4000784000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0814 16:01:54.633978    3945 log.go:172] (0x4000546000) Data frame received for 5\nI0814 16:01:54.634058    3945 log.go:172] (0x4000784000) (5) Data frame handling\nI0814 16:01:54.634699    3945 log.go:172] (0x4000546000) (0x4000bf60a0) Stream removed, broadcasting: 1\nI0814 16:01:54.637682    3945 log.go:172] (0x4000546000) Go away received\nI0814 16:01:54.642017    3945 log.go:172] (0x4000546000) (0x4000bf60a0) Stream removed, broadcasting: 1\nI0814 16:01:54.642340    3945 log.go:172] (0x4000546000) (0x4000a0c000) Stream removed, broadcasting: 3\nI0814 16:01:54.642564    3945 log.go:172] (0x4000546000) (0x4000784000) Stream removed, broadcasting: 5\n"
Aug 14 16:01:54.654: INFO: stdout: ""
Aug 14 16:01:54.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2623 execpodkblv5 -- /bin/sh -x -c nc -zv -t -w 2 10.109.190.43 80'
Aug 14 16:01:56.136: INFO: stderr: "I0814 16:01:56.008582    3969 log.go:172] (0x4000964000) (0x40009d8000) Create stream\nI0814 16:01:56.011109    3969 log.go:172] (0x4000964000) (0x40009d8000) Stream added, broadcasting: 1\nI0814 16:01:56.023594    3969 log.go:172] (0x4000964000) Reply frame received for 1\nI0814 16:01:56.025029    3969 log.go:172] (0x4000964000) (0x40009d80a0) Create stream\nI0814 16:01:56.025158    3969 log.go:172] (0x4000964000) (0x40009d80a0) Stream added, broadcasting: 3\nI0814 16:01:56.026948    3969 log.go:172] (0x4000964000) Reply frame received for 3\nI0814 16:01:56.027334    3969 log.go:172] (0x4000964000) (0x40009d8140) Create stream\nI0814 16:01:56.027445    3969 log.go:172] (0x4000964000) (0x40009d8140) Stream added, broadcasting: 5\nI0814 16:01:56.029055    3969 log.go:172] (0x4000964000) Reply frame received for 5\nI0814 16:01:56.113977    3969 log.go:172] (0x4000964000) Data frame received for 3\nI0814 16:01:56.114548    3969 log.go:172] (0x4000964000) Data frame received for 5\nI0814 16:01:56.114802    3969 log.go:172] (0x4000964000) Data frame received for 1\nI0814 16:01:56.115078    3969 log.go:172] (0x40009d8000) (1) Data frame handling\nI0814 16:01:56.115255    3969 log.go:172] (0x40009d8140) (5) Data frame handling\nI0814 16:01:56.115581    3969 log.go:172] (0x40009d80a0) (3) Data frame handling\n+ nc -zv -t -w 2 10.109.190.43 80\nConnection to 10.109.190.43 80 port [tcp/http] succeeded!\nI0814 16:01:56.117717    3969 log.go:172] (0x40009d8000) (1) Data frame sent\nI0814 16:01:56.117996    3969 log.go:172] (0x40009d8140) (5) Data frame sent\nI0814 16:01:56.118092    3969 log.go:172] (0x4000964000) Data frame received for 5\nI0814 16:01:56.118147    3969 log.go:172] (0x40009d8140) (5) Data frame handling\nI0814 16:01:56.118958    3969 log.go:172] (0x4000964000) (0x40009d8000) Stream removed, broadcasting: 1\nI0814 16:01:56.122500    3969 log.go:172] (0x4000964000) Go away received\nI0814 16:01:56.125356    3969 log.go:172] (0x4000964000) (0x40009d8000) Stream removed, broadcasting: 1\nI0814 16:01:56.125653    3969 log.go:172] (0x4000964000) (0x40009d80a0) Stream removed, broadcasting: 3\nI0814 16:01:56.125927    3969 log.go:172] (0x4000964000) (0x40009d8140) Stream removed, broadcasting: 5\n"
Aug 14 16:01:56.137: INFO: stdout: ""
Aug 14 16:01:56.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2623 execpodkblv5 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30704'
Aug 14 16:01:57.627: INFO: stderr: "I0814 16:01:57.498738    3992 log.go:172] (0x40000ec420) (0x40007f9720) Create stream\nI0814 16:01:57.501488    3992 log.go:172] (0x40000ec420) (0x40007f9720) Stream added, broadcasting: 1\nI0814 16:01:57.515362    3992 log.go:172] (0x40000ec420) Reply frame received for 1\nI0814 16:01:57.516021    3992 log.go:172] (0x40000ec420) (0x40007f97c0) Create stream\nI0814 16:01:57.516093    3992 log.go:172] (0x40000ec420) (0x40007f97c0) Stream added, broadcasting: 3\nI0814 16:01:57.517857    3992 log.go:172] (0x40000ec420) Reply frame received for 3\nI0814 16:01:57.518126    3992 log.go:172] (0x40000ec420) (0x40007f9860) Create stream\nI0814 16:01:57.518189    3992 log.go:172] (0x40000ec420) (0x40007f9860) Stream added, broadcasting: 5\nI0814 16:01:57.519548    3992 log.go:172] (0x40000ec420) Reply frame received for 5\nI0814 16:01:57.604512    3992 log.go:172] (0x40000ec420) Data frame received for 5\nI0814 16:01:57.605123    3992 log.go:172] (0x40000ec420) Data frame received for 3\nI0814 16:01:57.605293    3992 log.go:172] (0x40007f9860) (5) Data frame handling\nI0814 16:01:57.605523    3992 log.go:172] (0x40007f97c0) (3) Data frame handling\nI0814 16:01:57.605744    3992 log.go:172] (0x40000ec420) Data frame received for 1\nI0814 16:01:57.605850    3992 log.go:172] (0x40007f9720) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30704\nConnection to 172.18.0.13 30704 port [tcp/30704] succeeded!\nI0814 16:01:57.607466    3992 log.go:172] (0x40007f9720) (1) Data frame sent\nI0814 16:01:57.608175    3992 log.go:172] (0x40007f9860) (5) Data frame sent\nI0814 16:01:57.608287    3992 log.go:172] (0x40000ec420) Data frame received for 5\nI0814 16:01:57.608373    3992 log.go:172] (0x40007f9860) (5) Data frame handling\nI0814 16:01:57.610805    3992 log.go:172] (0x40000ec420) (0x40007f9720) Stream removed, broadcasting: 1\nI0814 16:01:57.612935    3992 log.go:172] (0x40000ec420) Go away received\nI0814 16:01:57.616685    3992 log.go:172] (0x40000ec420) (0x40007f9720) Stream removed, broadcasting: 1\nI0814 16:01:57.617596    3992 log.go:172] (0x40000ec420) (0x40007f97c0) Stream removed, broadcasting: 3\nI0814 16:01:57.618067    3992 log.go:172] (0x40000ec420) (0x40007f9860) Stream removed, broadcasting: 5\n"
Aug 14 16:01:57.628: INFO: stdout: ""
Aug 14 16:01:57.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2623 execpodkblv5 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30704'
Aug 14 16:01:59.107: INFO: stderr: "I0814 16:01:58.981025    4015 log.go:172] (0x400003a4d0) (0x40009f2000) Create stream\nI0814 16:01:58.983998    4015 log.go:172] (0x400003a4d0) (0x40009f2000) Stream added, broadcasting: 1\nI0814 16:01:59.000522    4015 log.go:172] (0x400003a4d0) Reply frame received for 1\nI0814 16:01:59.001808    4015 log.go:172] (0x400003a4d0) (0x4000813400) Create stream\nI0814 16:01:59.001940    4015 log.go:172] (0x400003a4d0) (0x4000813400) Stream added, broadcasting: 3\nI0814 16:01:59.003348    4015 log.go:172] (0x400003a4d0) Reply frame received for 3\nI0814 16:01:59.003609    4015 log.go:172] (0x400003a4d0) (0x40008135e0) Create stream\nI0814 16:01:59.003692    4015 log.go:172] (0x400003a4d0) (0x40008135e0) Stream added, broadcasting: 5\nI0814 16:01:59.005119    4015 log.go:172] (0x400003a4d0) Reply frame received for 5\nI0814 16:01:59.085502    4015 log.go:172] (0x400003a4d0) Data frame received for 3\nI0814 16:01:59.085882    4015 log.go:172] (0x4000813400) (3) Data frame handling\nI0814 16:01:59.087583    4015 log.go:172] (0x400003a4d0) Data frame received for 5\nI0814 16:01:59.087696    4015 log.go:172] (0x40008135e0) (5) Data frame handling\nI0814 16:01:59.088651    4015 log.go:172] (0x400003a4d0) Data frame received for 1\nI0814 16:01:59.088890    4015 log.go:172] (0x40009f2000) (1) Data frame handling\nI0814 16:01:59.089341    4015 log.go:172] (0x40009f2000) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 30704\nConnection to 172.18.0.15 30704 port [tcp/30704] succeeded!\nI0814 16:01:59.089908    4015 log.go:172] (0x40008135e0) (5) Data frame sent\nI0814 16:01:59.090376    4015 log.go:172] (0x400003a4d0) Data frame received for 5\nI0814 16:01:59.090501    4015 log.go:172] (0x40008135e0) (5) Data frame handling\nI0814 16:01:59.092576    4015 log.go:172] (0x400003a4d0) (0x40009f2000) Stream removed, broadcasting: 1\nI0814 16:01:59.094243    4015 log.go:172] (0x400003a4d0) Go away received\nI0814 16:01:59.097734    4015 log.go:172] (0x400003a4d0) (0x40009f2000) Stream removed, broadcasting: 1\nI0814 16:01:59.098038    4015 log.go:172] (0x400003a4d0) (0x4000813400) Stream removed, broadcasting: 3\nI0814 16:01:59.098248    4015 log.go:172] (0x400003a4d0) (0x40008135e0) Stream removed, broadcasting: 5\n"
Aug 14 16:01:59.108: INFO: stdout: ""
Aug 14 16:01:59.109: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:01:59.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2623" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:20.465 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":266,"skipped":4559,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:01:59.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 14 16:02:09.393: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 16:02:09.450: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 16:02:11.450: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 16:02:11.511: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 16:02:13.451: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 16:02:13.458: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 16:02:15.451: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 16:02:15.459: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 16:02:17.451: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 16:02:17.459: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 16:02:19.451: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 16:02:19.458: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 16:02:21.451: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 16:02:21.459: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 14 16:02:23.451: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 14 16:02:23.457: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:02:23.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1572" for this suite.

• [SLOW TEST:24.304 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4565,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:02:23.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 14 16:02:23.627: INFO: Waiting up to 5m0s for pod "pod-3ccc3379-a43c-4c54-92db-95f0bc665850" in namespace "emptydir-4171" to be "Succeeded or Failed"
Aug 14 16:02:23.684: INFO: Pod "pod-3ccc3379-a43c-4c54-92db-95f0bc665850": Phase="Pending", Reason="", readiness=false. Elapsed: 56.317171ms
Aug 14 16:02:25.743: INFO: Pod "pod-3ccc3379-a43c-4c54-92db-95f0bc665850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115736418s
Aug 14 16:02:27.747: INFO: Pod "pod-3ccc3379-a43c-4c54-92db-95f0bc665850": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119469327s
STEP: Saw pod success
Aug 14 16:02:27.747: INFO: Pod "pod-3ccc3379-a43c-4c54-92db-95f0bc665850" satisfied condition "Succeeded or Failed"
Aug 14 16:02:27.758: INFO: Trying to get logs from node kali-worker pod pod-3ccc3379-a43c-4c54-92db-95f0bc665850 container test-container: 
STEP: delete the pod
Aug 14 16:02:27.886: INFO: Waiting for pod pod-3ccc3379-a43c-4c54-92db-95f0bc665850 to disappear
Aug 14 16:02:27.932: INFO: Pod pod-3ccc3379-a43c-4c54-92db-95f0bc665850 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:02:27.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4171" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4600,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:02:28.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 14 16:02:36.245: INFO: 10 pods remaining
Aug 14 16:02:36.245: INFO: 10 pods has nil DeletionTimestamp
Aug 14 16:02:36.245: INFO: 
Aug 14 16:02:36.783: INFO: 10 pods remaining
Aug 14 16:02:36.783: INFO: 9 pods has nil DeletionTimestamp
Aug 14 16:02:36.783: INFO: 
Aug 14 16:02:38.894: INFO: 0 pods remaining
Aug 14 16:02:38.894: INFO: 0 pods has nil DeletionTimestamp
Aug 14 16:02:38.895: INFO: 
Aug 14 16:02:40.648: INFO: 0 pods remaining
Aug 14 16:02:40.648: INFO: 0 pods has nil DeletionTimestamp
Aug 14 16:02:40.648: INFO: 
STEP: Gathering metrics
W0814 16:02:42.172874      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 14 16:02:42.173: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:02:42.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8466" for this suite.

• [SLOW TEST:14.735 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":269,"skipped":4631,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:02:42.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-5746
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 14 16:02:44.642: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 14 16:02:45.245: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 16:02:47.251: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 16:02:49.251: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 14 16:02:51.252: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 16:02:53.252: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 16:02:55.251: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 16:02:57.251: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 16:02:59.251: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 14 16:03:01.253: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 14 16:03:01.278: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 14 16:03:05.369: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.220 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5746 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 16:03:05.369: INFO: >>> kubeConfig: /root/.kube/config
I0814 16:03:05.425826      10 log.go:172] (0x400281c420) (0x4002754c80) Create stream
I0814 16:03:05.425976      10 log.go:172] (0x400281c420) (0x4002754c80) Stream added, broadcasting: 1
I0814 16:03:05.429638      10 log.go:172] (0x400281c420) Reply frame received for 1
I0814 16:03:05.429788      10 log.go:172] (0x400281c420) (0x4002754d20) Create stream
I0814 16:03:05.429877      10 log.go:172] (0x400281c420) (0x4002754d20) Stream added, broadcasting: 3
I0814 16:03:05.431143      10 log.go:172] (0x400281c420) Reply frame received for 3
I0814 16:03:05.431291      10 log.go:172] (0x400281c420) (0x4002754dc0) Create stream
I0814 16:03:05.431373      10 log.go:172] (0x400281c420) (0x4002754dc0) Stream added, broadcasting: 5
I0814 16:03:05.432676      10 log.go:172] (0x400281c420) Reply frame received for 5
I0814 16:03:06.498577      10 log.go:172] (0x400281c420) Data frame received for 3
I0814 16:03:06.498816      10 log.go:172] (0x4002754d20) (3) Data frame handling
I0814 16:03:06.499068      10 log.go:172] (0x400281c420) Data frame received for 5
I0814 16:03:06.499361      10 log.go:172] (0x4002754dc0) (5) Data frame handling
I0814 16:03:06.499564      10 log.go:172] (0x4002754d20) (3) Data frame sent
I0814 16:03:06.499685      10 log.go:172] (0x400281c420) Data frame received for 3
I0814 16:03:06.499788      10 log.go:172] (0x4002754d20) (3) Data frame handling
I0814 16:03:06.500671      10 log.go:172] (0x400281c420) Data frame received for 1
I0814 16:03:06.500887      10 log.go:172] (0x4002754c80) (1) Data frame handling
I0814 16:03:06.500999      10 log.go:172] (0x4002754c80) (1) Data frame sent
I0814 16:03:06.501121      10 log.go:172] (0x400281c420) (0x4002754c80) Stream removed, broadcasting: 1
I0814 16:03:06.501249      10 log.go:172] (0x400281c420) Go away received
I0814 16:03:06.501760      10 log.go:172] (0x400281c420) (0x4002754c80) Stream removed, broadcasting: 1
I0814 16:03:06.501933      10 log.go:172] (0x400281c420) (0x4002754d20) Stream removed, broadcasting: 3
I0814 16:03:06.502093      10 log.go:172] (0x400281c420) (0x4002754dc0) Stream removed, broadcasting: 5
Aug 14 16:03:06.502: INFO: Found all expected endpoints: [netserver-0]
Aug 14 16:03:06.507: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.121 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5746 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 16:03:06.508: INFO: >>> kubeConfig: /root/.kube/config
I0814 16:03:06.567332      10 log.go:172] (0x4002c32580) (0x4000db63c0) Create stream
I0814 16:03:06.567468      10 log.go:172] (0x4002c32580) (0x4000db63c0) Stream added, broadcasting: 1
I0814 16:03:06.571305      10 log.go:172] (0x4002c32580) Reply frame received for 1
I0814 16:03:06.571530      10 log.go:172] (0x4002c32580) (0x40026e8e60) Create stream
I0814 16:03:06.571661      10 log.go:172] (0x4002c32580) (0x40026e8e60) Stream added, broadcasting: 3
I0814 16:03:06.574229      10 log.go:172] (0x4002c32580) Reply frame received for 3
I0814 16:03:06.574377      10 log.go:172] (0x4002c32580) (0x40026e8f00) Create stream
I0814 16:03:06.574460      10 log.go:172] (0x4002c32580) (0x40026e8f00) Stream added, broadcasting: 5
I0814 16:03:06.576456      10 log.go:172] (0x4002c32580) Reply frame received for 5
I0814 16:03:07.642702      10 log.go:172] (0x4002c32580) Data frame received for 3
I0814 16:03:07.642951      10 log.go:172] (0x40026e8e60) (3) Data frame handling
I0814 16:03:07.643135      10 log.go:172] (0x4002c32580) Data frame received for 5
I0814 16:03:07.643352      10 log.go:172] (0x40026e8f00) (5) Data frame handling
I0814 16:03:07.643565      10 log.go:172] (0x40026e8e60) (3) Data frame sent
I0814 16:03:07.643704      10 log.go:172] (0x4002c32580) Data frame received for 3
I0814 16:03:07.643806      10 log.go:172] (0x40026e8e60) (3) Data frame handling
I0814 16:03:07.644930      10 log.go:172] (0x4002c32580) Data frame received for 1
I0814 16:03:07.645109      10 log.go:172] (0x4000db63c0) (1) Data frame handling
I0814 16:03:07.645286      10 log.go:172] (0x4000db63c0) (1) Data frame sent
I0814 16:03:07.645434      10 log.go:172] (0x4002c32580) (0x4000db63c0) Stream removed, broadcasting: 1
I0814 16:03:07.645602      10 log.go:172] (0x4002c32580) Go away received
I0814 16:03:07.646493      10 log.go:172] (0x4002c32580) (0x4000db63c0) Stream removed, broadcasting: 1
I0814 16:03:07.646609      10 log.go:172] (0x4002c32580) (0x40026e8e60) Stream removed, broadcasting: 3
I0814 16:03:07.646739      10 log.go:172] (0x4002c32580) (0x40026e8f00) Stream removed, broadcasting: 5
Aug 14 16:03:07.646: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:03:07.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5746" for this suite.

• [SLOW TEST:25.401 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4635,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:03:08.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-v6t7
STEP: Creating a pod to test atomic-volume-subpath
Aug 14 16:03:08.656: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-v6t7" in namespace "subpath-8685" to be "Succeeded or Failed"
Aug 14 16:03:08.697: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Pending", Reason="", readiness=false. Elapsed: 41.22968ms
Aug 14 16:03:10.816: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159623195s
Aug 14 16:03:13.139: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483111532s
Aug 14 16:03:15.168: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512135361s
Aug 14 16:03:17.247: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Running", Reason="", readiness=true. Elapsed: 8.590578645s
Aug 14 16:03:19.990: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Running", Reason="", readiness=true. Elapsed: 11.334107389s
Aug 14 16:03:22.321: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Running", Reason="", readiness=true. Elapsed: 13.665175233s
Aug 14 16:03:24.366: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Running", Reason="", readiness=true. Elapsed: 15.710341472s
Aug 14 16:03:26.374: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Running", Reason="", readiness=true. Elapsed: 17.718135994s
Aug 14 16:03:28.381: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Running", Reason="", readiness=true. Elapsed: 19.724543619s
Aug 14 16:03:30.386: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Running", Reason="", readiness=true. Elapsed: 21.730170986s
Aug 14 16:03:32.475: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Running", Reason="", readiness=true. Elapsed: 23.819024399s
Aug 14 16:03:34.781: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Running", Reason="", readiness=true. Elapsed: 26.125147556s
Aug 14 16:03:36.806: INFO: Pod "pod-subpath-test-secret-v6t7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.149664674s
STEP: Saw pod success
Aug 14 16:03:36.806: INFO: Pod "pod-subpath-test-secret-v6t7" satisfied condition "Succeeded or Failed"
Aug 14 16:03:37.235: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-v6t7 container test-container-subpath-secret-v6t7: 
STEP: delete the pod
Aug 14 16:03:37.595: INFO: Waiting for pod pod-subpath-test-secret-v6t7 to disappear
Aug 14 16:03:37.679: INFO: Pod pod-subpath-test-secret-v6t7 no longer exists
STEP: Deleting pod pod-subpath-test-secret-v6t7
Aug 14 16:03:37.679: INFO: Deleting pod "pod-subpath-test-secret-v6t7" in namespace "subpath-8685"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:03:37.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8685" for this suite.

• [SLOW TEST:29.796 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":271,"skipped":4637,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:03:38.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 14 16:03:40.455: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:03:58.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4021" for this suite.

• [SLOW TEST:20.782 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":272,"skipped":4664,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:03:58.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:03:59.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1870" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":273,"skipped":4686,"failed":0}
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:03:59.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Aug 14 16:03:59.998: INFO: Waiting up to 5m0s for pod "var-expansion-a65039b5-a5bf-4939-b28c-17fa47e0e8dc" in namespace "var-expansion-7224" to be "Succeeded or Failed"
Aug 14 16:04:00.274: INFO: Pod "var-expansion-a65039b5-a5bf-4939-b28c-17fa47e0e8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 275.75799ms
Aug 14 16:04:02.280: INFO: Pod "var-expansion-a65039b5-a5bf-4939-b28c-17fa47e0e8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281346904s
Aug 14 16:04:04.290: INFO: Pod "var-expansion-a65039b5-a5bf-4939-b28c-17fa47e0e8dc": Phase="Running", Reason="", readiness=true. Elapsed: 4.291309078s
Aug 14 16:04:06.294: INFO: Pod "var-expansion-a65039b5-a5bf-4939-b28c-17fa47e0e8dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.295462854s
STEP: Saw pod success
Aug 14 16:04:06.294: INFO: Pod "var-expansion-a65039b5-a5bf-4939-b28c-17fa47e0e8dc" satisfied condition "Succeeded or Failed"
Aug 14 16:04:06.298: INFO: Trying to get logs from node kali-worker pod var-expansion-a65039b5-a5bf-4939-b28c-17fa47e0e8dc container dapi-container: 
STEP: delete the pod
Aug 14 16:04:06.510: INFO: Waiting for pod var-expansion-a65039b5-a5bf-4939-b28c-17fa47e0e8dc to disappear
Aug 14 16:04:06.563: INFO: Pod var-expansion-a65039b5-a5bf-4939-b28c-17fa47e0e8dc no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:04:06.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7224" for this suite.

• [SLOW TEST:7.153 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4691,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 14 16:04:06.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1164
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-1164
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1164
Aug 14 16:04:07.282: INFO: Found 0 stateful pods, waiting for 1
Aug 14 16:04:17.288: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 14 16:04:17.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 14 16:04:18.803: INFO: stderr: "I0814 16:04:18.630752    4038 log.go:172] (0x40000ea370) (0x4000716000) Create stream\nI0814 16:04:18.634518    4038 log.go:172] (0x40000ea370) (0x4000716000) Stream added, broadcasting: 1\nI0814 16:04:18.647268    4038 log.go:172] (0x40000ea370) Reply frame received for 1\nI0814 16:04:18.648080    4038 log.go:172] (0x40000ea370) (0x40007160a0) Create stream\nI0814 16:04:18.648155    4038 log.go:172] (0x40000ea370) (0x40007160a0) Stream added, broadcasting: 3\nI0814 16:04:18.649850    4038 log.go:172] (0x40000ea370) Reply frame received for 3\nI0814 16:04:18.650349    4038 log.go:172] (0x40000ea370) (0x400076c000) Create stream\nI0814 16:04:18.650494    4038 log.go:172] (0x40000ea370) (0x400076c000) Stream added, broadcasting: 5\nI0814 16:04:18.651886    4038 log.go:172] (0x40000ea370) Reply frame received for 5\nI0814 16:04:18.730203    4038 log.go:172] (0x40000ea370) Data frame received for 5\nI0814 16:04:18.730407    4038 log.go:172] (0x400076c000) (5) Data frame handling\nI0814 16:04:18.730821    4038 log.go:172] (0x400076c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0814 16:04:18.784651    4038 log.go:172] (0x40000ea370) Data frame received for 3\nI0814 16:04:18.784806    4038 log.go:172] (0x40007160a0) (3) Data frame handling\nI0814 16:04:18.784874    4038 log.go:172] (0x40007160a0) (3) Data frame sent\nI0814 16:04:18.784934    4038 log.go:172] (0x40000ea370) Data frame received for 3\nI0814 16:04:18.784983    4038 log.go:172] (0x40007160a0) (3) Data frame handling\nI0814 16:04:18.785187    4038 log.go:172] (0x40000ea370) Data frame received for 5\nI0814 16:04:18.785292    4038 log.go:172] (0x400076c000) (5) Data frame handling\nI0814 16:04:18.786202    4038 log.go:172] (0x40000ea370) Data frame received for 1\nI0814 16:04:18.786367    4038 log.go:172] (0x4000716000) (1) Data frame handling\nI0814 16:04:18.786527    4038 log.go:172] (0x4000716000) (1) Data frame sent\nI0814 16:04:18.790394    4038 log.go:172] (0x40000ea370) (0x4000716000) Stream removed, broadcasting: 1\nI0814 16:04:18.791263    4038 log.go:172] (0x40000ea370) Go away received\nI0814 16:04:18.794387    4038 log.go:172] (0x40000ea370) (0x4000716000) Stream removed, broadcasting: 1\nI0814 16:04:18.794641    4038 log.go:172] (0x40000ea370) (0x40007160a0) Stream removed, broadcasting: 3\nI0814 16:04:18.794818    4038 log.go:172] (0x40000ea370) (0x400076c000) Stream removed, broadcasting: 5\n"
Aug 14 16:04:18.804: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 14 16:04:18.804: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 14 16:04:18.870: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 14 16:04:28.906: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 16:04:28.906: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 16:04:28.932: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
Aug 14 16:04:28.933: INFO: ss-0  kali-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  }]
Aug 14 16:04:28.933: INFO: 
Aug 14 16:04:28.933: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 14 16:04:29.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988135891s
Aug 14 16:04:31.005: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98075038s
Aug 14 16:04:32.015: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.916647385s
Aug 14 16:04:33.068: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.906112829s
Aug 14 16:04:34.076: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.853029168s
Aug 14 16:04:35.095: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.845853733s
Aug 14 16:04:36.102: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.8266039s
Aug 14 16:04:37.112: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.819007603s
Aug 14 16:04:38.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 809.791362ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1164
Aug 14 16:04:39.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 16:04:40.517: INFO: stderr: "I0814 16:04:40.422913    4061 log.go:172] (0x4000a48000) (0x40009d0000) Create stream\nI0814 16:04:40.427360    4061 log.go:172] (0x4000a48000) (0x40009d0000) Stream added, broadcasting: 1\nI0814 16:04:40.441549    4061 log.go:172] (0x4000a48000) Reply frame received for 1\nI0814 16:04:40.442802    4061 log.go:172] (0x4000a48000) (0x40007f6140) Create stream\nI0814 16:04:40.442922    4061 log.go:172] (0x4000a48000) (0x40007f6140) Stream added, broadcasting: 3\nI0814 16:04:40.444473    4061 log.go:172] (0x4000a48000) Reply frame received for 3\nI0814 16:04:40.444841    4061 log.go:172] (0x4000a48000) (0x40009d00a0) Create stream\nI0814 16:04:40.444913    4061 log.go:172] (0x4000a48000) (0x40009d00a0) Stream added, broadcasting: 5\nI0814 16:04:40.446308    4061 log.go:172] (0x4000a48000) Reply frame received for 5\nI0814 16:04:40.499631    4061 log.go:172] (0x4000a48000) Data frame received for 3\nI0814 16:04:40.499937    4061 log.go:172] (0x4000a48000) Data frame received for 1\nI0814 16:04:40.500061    4061 log.go:172] (0x40007f6140) (3) Data frame handling\nI0814 16:04:40.500284    4061 log.go:172] (0x4000a48000) Data frame received for 5\nI0814 16:04:40.500388    4061 log.go:172] (0x40009d00a0) (5) Data frame handling\nI0814 16:04:40.500553    4061 log.go:172] (0x40009d0000) (1) Data frame handling\nI0814 16:04:40.501004    4061 log.go:172] (0x40009d0000) (1) Data frame sent\nI0814 16:04:40.501203    4061 log.go:172] (0x40007f6140) (3) Data frame sent\nI0814 16:04:40.501275    4061 log.go:172] (0x4000a48000) Data frame received for 3\nI0814 16:04:40.501338    4061 log.go:172] (0x40007f6140) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0814 16:04:40.502284    4061 log.go:172] (0x40009d00a0) (5) Data frame sent\nI0814 16:04:40.502368    4061 log.go:172] (0x4000a48000) Data frame received for 5\nI0814 16:04:40.502586    4061 log.go:172] (0x4000a48000) (0x40009d0000) Stream removed, broadcasting: 1\nI0814 16:04:40.503245    4061 log.go:172] (0x40009d00a0) (5) Data frame handling\nI0814 16:04:40.505930    4061 log.go:172] (0x4000a48000) Go away received\nI0814 16:04:40.508555    4061 log.go:172] (0x4000a48000) (0x40009d0000) Stream removed, broadcasting: 1\nI0814 16:04:40.508969    4061 log.go:172] (0x4000a48000) (0x40007f6140) Stream removed, broadcasting: 3\nI0814 16:04:40.509193    4061 log.go:172] (0x4000a48000) (0x40009d00a0) Stream removed, broadcasting: 5\n"
Aug 14 16:04:40.517: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 14 16:04:40.517: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 14 16:04:40.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 16:04:41.942: INFO: stderr: "I0814 16:04:41.857092    4085 log.go:172] (0x40000fe370) (0x400072c000) Create stream\nI0814 16:04:41.860857    4085 log.go:172] (0x40000fe370) (0x400072c000) Stream added, broadcasting: 1\nI0814 16:04:41.874742    4085 log.go:172] (0x40000fe370) Reply frame received for 1\nI0814 16:04:41.875233    4085 log.go:172] (0x40000fe370) (0x400072c0a0) Create stream\nI0814 16:04:41.875281    4085 log.go:172] (0x40000fe370) (0x400072c0a0) Stream added, broadcasting: 3\nI0814 16:04:41.876835    4085 log.go:172] (0x40000fe370) Reply frame received for 3\nI0814 16:04:41.877050    4085 log.go:172] (0x40000fe370) (0x400078c000) Create stream\nI0814 16:04:41.877167    4085 log.go:172] (0x40000fe370) (0x400078c000) Stream added, broadcasting: 5\nI0814 16:04:41.878272    4085 log.go:172] (0x40000fe370) Reply frame received for 5\nI0814 16:04:41.923657    4085 log.go:172] (0x40000fe370) Data frame received for 5\nI0814 16:04:41.923906    4085 log.go:172] (0x40000fe370) Data frame received for 1\nI0814 16:04:41.924105    4085 log.go:172] (0x40000fe370) Data frame received for 3\nI0814 16:04:41.924319    4085 log.go:172] (0x400072c0a0) (3) Data frame handling\nI0814 16:04:41.924445    4085 log.go:172] (0x400072c000) (1) Data frame handling\nI0814 16:04:41.924640    4085 log.go:172] (0x400078c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0814 16:04:41.926195    4085 log.go:172] (0x400078c000) (5) Data frame sent\nI0814 16:04:41.926425    4085 log.go:172] (0x400072c0a0) (3) Data frame sent\nI0814 16:04:41.927280    4085 log.go:172] (0x40000fe370) Data frame received for 3\nI0814 16:04:41.927354    4085 log.go:172] (0x400072c0a0) (3) Data frame handling\nI0814 16:04:41.927497    4085 log.go:172] (0x40000fe370) Data frame received for 5\nI0814 16:04:41.927600    4085 log.go:172] (0x400078c000) (5) Data frame handling\nI0814 16:04:41.928335    4085 
log.go:172] (0x400072c000) (1) Data frame sent\nI0814 16:04:41.930496    4085 log.go:172] (0x40000fe370) (0x400072c000) Stream removed, broadcasting: 1\nI0814 16:04:41.931168    4085 log.go:172] (0x40000fe370) Go away received\nI0814 16:04:41.934072    4085 log.go:172] (0x40000fe370) (0x400072c000) Stream removed, broadcasting: 1\nI0814 16:04:41.934411    4085 log.go:172] (0x40000fe370) (0x400072c0a0) Stream removed, broadcasting: 3\nI0814 16:04:41.934637    4085 log.go:172] (0x40000fe370) (0x400078c000) Stream removed, broadcasting: 5\n"
Aug 14 16:04:41.943: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 14 16:04:41.943: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
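The `mv: can't rename '/tmp/index.html': No such file or directory` in the ss-1 stderr above is expected: on that pod the file had never been moved out of the docroot, and the trailing `|| true` forces a zero exit status so `kubectl exec` does not report the attempt as a failure. A minimal local sketch of the same "move if present" idiom, run against a temp directory instead of the pod's `/usr/local/apache2/htdocs` (paths here are stand-ins, not the pod's real filesystem):

```shell
#!/bin/sh
# Sketch of the e2e helper's "move if present" idiom.
set -u
tmp=$(mktemp -d)
mkdir -p "$tmp/htdocs"

# First attempt: the source file does not exist, mv fails on stderr,
# but "|| true" swallows the failure so the overall status is 0.
mv -v "$tmp/index.html" "$tmp/htdocs/" || true
echo "exit status after '|| true': $?"

# Create the file and retry: now the move succeeds and mv -v reports it.
echo ok > "$tmp/index.html"
mv -v "$tmp/index.html" "$tmp/htdocs/" || true
cat "$tmp/htdocs/index.html"

rm -r "$tmp"
```

This is why the framework can run the identical command on all three pods without tracking which of them actually holds the file at any moment.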

Aug 14 16:04:41.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 16:04:43.428: INFO: stderr: "I0814 16:04:43.329282    4108 log.go:172] (0x400003a2c0) (0x4000bb80a0) Create stream\nI0814 16:04:43.332684    4108 log.go:172] (0x400003a2c0) (0x4000bb80a0) Stream added, broadcasting: 1\nI0814 16:04:43.344456    4108 log.go:172] (0x400003a2c0) Reply frame received for 1\nI0814 16:04:43.345137    4108 log.go:172] (0x400003a2c0) (0x4000bb8140) Create stream\nI0814 16:04:43.345208    4108 log.go:172] (0x400003a2c0) (0x4000bb8140) Stream added, broadcasting: 3\nI0814 16:04:43.347043    4108 log.go:172] (0x400003a2c0) Reply frame received for 3\nI0814 16:04:43.347590    4108 log.go:172] (0x400003a2c0) (0x4000a20000) Create stream\nI0814 16:04:43.347702    4108 log.go:172] (0x400003a2c0) (0x4000a20000) Stream added, broadcasting: 5\nI0814 16:04:43.349406    4108 log.go:172] (0x400003a2c0) Reply frame received for 5\nI0814 16:04:43.413789    4108 log.go:172] (0x400003a2c0) Data frame received for 3\nI0814 16:04:43.414194    4108 log.go:172] (0x4000bb8140) (3) Data frame handling\nI0814 16:04:43.414453    4108 log.go:172] (0x400003a2c0) Data frame received for 1\nI0814 16:04:43.414539    4108 log.go:172] (0x4000bb80a0) (1) Data frame handling\nI0814 16:04:43.414826    4108 log.go:172] (0x400003a2c0) Data frame received for 5\nI0814 16:04:43.414897    4108 log.go:172] (0x4000a20000) (5) Data frame handling\nI0814 16:04:43.415292    4108 log.go:172] (0x4000bb80a0) (1) Data frame sent\nI0814 16:04:43.415633    4108 log.go:172] (0x4000a20000) (5) Data frame sent\nI0814 16:04:43.415699    4108 log.go:172] (0x400003a2c0) Data frame received for 5\nI0814 16:04:43.415751    4108 log.go:172] (0x4000a20000) (5) Data frame handling\nI0814 16:04:43.415811    4108 log.go:172] (0x4000bb8140) (3) Data frame sent\nI0814 16:04:43.415881    4108 log.go:172] (0x400003a2c0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0814 16:04:43.418031    4108 
log.go:172] (0x400003a2c0) (0x4000bb80a0) Stream removed, broadcasting: 1\nI0814 16:04:43.419226    4108 log.go:172] (0x4000bb8140) (3) Data frame handling\nI0814 16:04:43.421936    4108 log.go:172] (0x400003a2c0) Go away received\nI0814 16:04:43.422207    4108 log.go:172] (0x400003a2c0) (0x4000bb80a0) Stream removed, broadcasting: 1\nI0814 16:04:43.422930    4108 log.go:172] (0x400003a2c0) (0x4000bb8140) Stream removed, broadcasting: 3\nI0814 16:04:43.423280    4108 log.go:172] (0x400003a2c0) (0x4000a20000) Stream removed, broadcasting: 5\n"
Aug 14 16:04:43.429: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 14 16:04:43.429: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 14 16:04:43.435: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 16:04:43.435: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 14 16:04:43.435: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with an unhealthy stateful pod
Aug 14 16:04:43.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 14 16:04:44.867: INFO: stderr: "I0814 16:04:44.777199    4129 log.go:172] (0x40007fc2c0) (0x4000998000) Create stream\nI0814 16:04:44.781792    4129 log.go:172] (0x40007fc2c0) (0x4000998000) Stream added, broadcasting: 1\nI0814 16:04:44.791581    4129 log.go:172] (0x40007fc2c0) Reply frame received for 1\nI0814 16:04:44.792209    4129 log.go:172] (0x40007fc2c0) (0x40009980a0) Create stream\nI0814 16:04:44.792273    4129 log.go:172] (0x40007fc2c0) (0x40009980a0) Stream added, broadcasting: 3\nI0814 16:04:44.794096    4129 log.go:172] (0x40007fc2c0) Reply frame received for 3\nI0814 16:04:44.794889    4129 log.go:172] (0x40007fc2c0) (0x40008352c0) Create stream\nI0814 16:04:44.794990    4129 log.go:172] (0x40007fc2c0) (0x40008352c0) Stream added, broadcasting: 5\nI0814 16:04:44.796274    4129 log.go:172] (0x40007fc2c0) Reply frame received for 5\nI0814 16:04:44.848840    4129 log.go:172] (0x40007fc2c0) Data frame received for 5\nI0814 16:04:44.849149    4129 log.go:172] (0x40007fc2c0) Data frame received for 1\nI0814 16:04:44.849480    4129 log.go:172] (0x40007fc2c0) Data frame received for 3\nI0814 16:04:44.849583    4129 log.go:172] (0x40009980a0) (3) Data frame handling\nI0814 16:04:44.849723    4129 log.go:172] (0x40008352c0) (5) Data frame handling\nI0814 16:04:44.849892    4129 log.go:172] (0x4000998000) (1) Data frame handling\nI0814 16:04:44.851099    4129 log.go:172] (0x4000998000) (1) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0814 16:04:44.851640    4129 log.go:172] (0x40009980a0) (3) Data frame sent\nI0814 16:04:44.851792    4129 log.go:172] (0x40007fc2c0) Data frame received for 3\nI0814 16:04:44.851967    4129 log.go:172] (0x40008352c0) (5) Data frame sent\nI0814 16:04:44.852097    4129 log.go:172] (0x40007fc2c0) Data frame received for 5\nI0814 16:04:44.852462    4129 log.go:172] (0x40007fc2c0) (0x4000998000) Stream removed, broadcasting: 1\nI0814 16:04:44.853138    4129 log.go:172] (0x40009980a0) (3) Data frame 
handling\nI0814 16:04:44.853375    4129 log.go:172] (0x40008352c0) (5) Data frame handling\nI0814 16:04:44.855730    4129 log.go:172] (0x40007fc2c0) Go away received\nI0814 16:04:44.858890    4129 log.go:172] (0x40007fc2c0) (0x4000998000) Stream removed, broadcasting: 1\nI0814 16:04:44.859240    4129 log.go:172] (0x40007fc2c0) (0x40009980a0) Stream removed, broadcasting: 3\nI0814 16:04:44.859439    4129 log.go:172] (0x40007fc2c0) (0x40008352c0) Stream removed, broadcasting: 5\n"
Aug 14 16:04:44.868: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 14 16:04:44.868: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 14 16:04:44.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 14 16:04:46.358: INFO: stderr: "I0814 16:04:46.205617    4153 log.go:172] (0x400003a420) (0x4000728000) Create stream\nI0814 16:04:46.209545    4153 log.go:172] (0x400003a420) (0x4000728000) Stream added, broadcasting: 1\nI0814 16:04:46.224241    4153 log.go:172] (0x400003a420) Reply frame received for 1\nI0814 16:04:46.225602    4153 log.go:172] (0x400003a420) (0x400080d5e0) Create stream\nI0814 16:04:46.225714    4153 log.go:172] (0x400003a420) (0x400080d5e0) Stream added, broadcasting: 3\nI0814 16:04:46.227455    4153 log.go:172] (0x400003a420) Reply frame received for 3\nI0814 16:04:46.227936    4153 log.go:172] (0x400003a420) (0x4000758000) Create stream\nI0814 16:04:46.228042    4153 log.go:172] (0x400003a420) (0x4000758000) Stream added, broadcasting: 5\nI0814 16:04:46.229551    4153 log.go:172] (0x400003a420) Reply frame received for 5\nI0814 16:04:46.305797    4153 log.go:172] (0x400003a420) Data frame received for 5\nI0814 16:04:46.306021    4153 log.go:172] (0x4000758000) (5) Data frame handling\nI0814 16:04:46.306498    4153 log.go:172] (0x4000758000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0814 16:04:46.338980    4153 log.go:172] (0x400003a420) Data frame received for 3\nI0814 16:04:46.339043    4153 log.go:172] (0x400080d5e0) (3) Data frame handling\nI0814 16:04:46.339111    4153 log.go:172] (0x400080d5e0) (3) Data frame sent\nI0814 16:04:46.339157    4153 log.go:172] (0x400003a420) Data frame received for 3\nI0814 16:04:46.339212    4153 log.go:172] (0x400080d5e0) (3) Data frame handling\nI0814 16:04:46.339459    4153 log.go:172] (0x400003a420) Data frame received for 5\nI0814 16:04:46.339623    4153 log.go:172] (0x4000758000) (5) Data frame handling\nI0814 16:04:46.340602    4153 log.go:172] (0x400003a420) Data frame received for 1\nI0814 16:04:46.340708    4153 log.go:172] (0x4000728000) (1) Data frame handling\nI0814 16:04:46.340942    4153 log.go:172] (0x4000728000) (1) Data frame sent\nI0814 16:04:46.343295  
  4153 log.go:172] (0x400003a420) (0x4000728000) Stream removed, broadcasting: 1\nI0814 16:04:46.345179    4153 log.go:172] (0x400003a420) Go away received\nI0814 16:04:46.348655    4153 log.go:172] (0x400003a420) (0x4000728000) Stream removed, broadcasting: 1\nI0814 16:04:46.349086    4153 log.go:172] (0x400003a420) (0x400080d5e0) Stream removed, broadcasting: 3\nI0814 16:04:46.349317    4153 log.go:172] (0x400003a420) (0x4000758000) Stream removed, broadcasting: 5\n"
Aug 14 16:04:46.359: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 14 16:04:46.359: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 14 16:04:46.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 14 16:04:47.782: INFO: stderr: "I0814 16:04:47.664972    4176 log.go:172] (0x400056c000) (0x40007fd220) Create stream\nI0814 16:04:47.669738    4176 log.go:172] (0x400056c000) (0x40007fd220) Stream added, broadcasting: 1\nI0814 16:04:47.679756    4176 log.go:172] (0x400056c000) Reply frame received for 1\nI0814 16:04:47.680272    4176 log.go:172] (0x400056c000) (0x40007fa000) Create stream\nI0814 16:04:47.680323    4176 log.go:172] (0x400056c000) (0x40007fa000) Stream added, broadcasting: 3\nI0814 16:04:47.682059    4176 log.go:172] (0x400056c000) Reply frame received for 3\nI0814 16:04:47.682617    4176 log.go:172] (0x400056c000) (0x40007fd400) Create stream\nI0814 16:04:47.682725    4176 log.go:172] (0x400056c000) (0x40007fd400) Stream added, broadcasting: 5\nI0814 16:04:47.684088    4176 log.go:172] (0x400056c000) Reply frame received for 5\nI0814 16:04:47.732256    4176 log.go:172] (0x400056c000) Data frame received for 5\nI0814 16:04:47.732511    4176 log.go:172] (0x40007fd400) (5) Data frame handling\nI0814 16:04:47.733136    4176 log.go:172] (0x40007fd400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0814 16:04:47.762236    4176 log.go:172] (0x400056c000) Data frame received for 3\nI0814 16:04:47.762450    4176 log.go:172] (0x40007fa000) (3) Data frame handling\nI0814 16:04:47.762589    4176 log.go:172] (0x40007fa000) (3) Data frame sent\nI0814 16:04:47.762712    4176 log.go:172] (0x400056c000) Data frame received for 3\nI0814 16:04:47.762815    4176 log.go:172] (0x40007fa000) (3) Data frame handling\nI0814 16:04:47.763023    4176 log.go:172] (0x400056c000) Data frame received for 5\nI0814 16:04:47.763171    4176 log.go:172] (0x40007fd400) (5) Data frame handling\nI0814 16:04:47.763906    4176 log.go:172] (0x400056c000) Data frame received for 1\nI0814 16:04:47.763992    4176 log.go:172] (0x40007fd220) (1) Data frame handling\nI0814 16:04:47.764106    4176 log.go:172] (0x40007fd220) (1) Data frame sent\nI0814 16:04:47.768208  
  4176 log.go:172] (0x400056c000) (0x40007fd220) Stream removed, broadcasting: 1\nI0814 16:04:47.770150    4176 log.go:172] (0x400056c000) Go away received\nI0814 16:04:47.774185    4176 log.go:172] (0x400056c000) (0x40007fd220) Stream removed, broadcasting: 1\nI0814 16:04:47.774489    4176 log.go:172] (0x400056c000) (0x40007fa000) Stream removed, broadcasting: 3\nI0814 16:04:47.774707    4176 log.go:172] (0x400056c000) (0x40007fd400) Stream removed, broadcasting: 5\n"
Aug 14 16:04:47.783: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 14 16:04:47.783: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 14 16:04:47.783: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 16:04:47.788: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 14 16:04:57.797: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 16:04:57.797: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 14 16:04:57.797: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
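The `Waiting for pod ... Ready=false` lines above come from a poll loop in the e2e framework: moving `index.html` out of the docroot makes the HTTP readiness probe fail, and the test then polls pod conditions until all replicas report `Ready=false`. A hedged shell sketch of that poll-until-condition shape, using a local file as a stand-in predicate instead of reading pod `.status.conditions` (the predicate and timings here are illustrative assumptions, not the framework's real values):

```shell
#!/bin/sh
# Generic poll-until loop in the shape of the framework's wait:
# retry a predicate every INTERVAL seconds until it holds or TIMEOUT expires.
set -u

INTERVAL=1
TIMEOUT=5
marker=$(mktemp)

condition_met() {
    # Stand-in predicate; the real test checks the pod's Ready condition.
    [ -s "$marker" ]
}

# Simulate the condition flipping after ~2 seconds.
( sleep 2; echo flipped > "$marker" ) &

elapsed=0
until condition_met; do
    if [ "$elapsed" -ge "$TIMEOUT" ]; then
        echo "timed out waiting for condition" >&2
        exit 1
    fi
    sleep "$INTERVAL"
    elapsed=$((elapsed + INTERVAL))
done
echo "condition met after ${elapsed}s"
rm -f "$marker"
```

The same shape explains the repeated `has not reached scale 0` dumps below: each iteration re-reads state, prints a snapshot, and sleeps before trying again.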
Aug 14 16:04:58.042: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 16:04:58.042: INFO: ss-0  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  }]
Aug 14 16:04:58.042: INFO: ss-1  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:04:58.042: INFO: ss-2  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:04:58.042: INFO: 
Aug 14 16:04:58.042: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 14 16:04:59.684: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 16:04:59.684: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  }]
Aug 14 16:04:59.684: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:04:59.684: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:04:59.684: INFO: 
Aug 14 16:04:59.684: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 14 16:05:01.083: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 16:05:01.083: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  }]
Aug 14 16:05:01.084: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:01.084: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:01.084: INFO: 
Aug 14 16:05:01.084: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 14 16:05:02.259: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 16:05:02.260: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  }]
Aug 14 16:05:02.260: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:02.260: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:02.260: INFO: 
Aug 14 16:05:02.260: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 14 16:05:03.267: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 16:05:03.267: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:07 +0000 UTC  }]
Aug 14 16:05:03.267: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:03.267: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:03.267: INFO: 
Aug 14 16:05:03.267: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 14 16:05:04.273: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 16:05:04.273: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:04.273: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:04.273: INFO: 
Aug 14 16:05:04.273: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 14 16:05:05.280: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 16:05:05.280: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:05.280: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:05.280: INFO: 
Aug 14 16:05:05.280: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 14 16:05:06.287: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 16:05:06.287: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:06.288: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:06.288: INFO: 
Aug 14 16:05:06.288: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 14 16:05:07.296: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 14 16:05:07.296: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:07.297: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-14 16:04:28 +0000 UTC  }]
Aug 14 16:05:07.297: INFO: 
Aug 14 16:05:07.297: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-1164
Aug 14 16:05:08.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 16:05:09.625: INFO: rc: 1
Aug 14 16:05:09.625: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
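The `rc: 1` / `Waiting 10s to retry failed RunHostCmd` cycle above is the framework's retry wrapper: it reruns the same `kubectl exec` every 10 seconds, and the error text evolves as the scale-down proceeds, first `container not found ("webserver")` while ss-1 is terminating, then `pods "ss-1" not found` once it is deleted. A hedged local sketch of that retry-with-delay shape, with a stand-in command that fails for the first two attempts (the attempt counts and delay are illustrative, not the framework's real parameters):

```shell
#!/bin/sh
# Retry-with-delay in the shape of the framework's RunHostCmd retry:
# rerun a command until it succeeds, sleeping DELAY between attempts,
# up to MAX tries.
set -u

DELAY=0   # the e2e framework uses 10s; 0 keeps the sketch fast
MAX=5
state=$(mktemp)
echo 0 > "$state"

attempt_cmd() {
    # Stand-in: fails on the first two attempts, then succeeds,
    # mimicking the window where the container or pod is not found.
    n=$(cat "$state")
    echo $((n + 1)) > "$state"
    [ "$n" -ge 2 ]
}

i=1
while [ "$i" -le "$MAX" ]; do
    if attempt_cmd; then
        echo "succeeded on attempt $i"
        break
    fi
    echo "rc: 1 -- retrying in ${DELAY}s (attempt $i/$MAX)" >&2
    sleep "$DELAY"
    i=$((i + 1))
done
rm -f "$state"
```

In the log that follows, the command never succeeds again, because ss-1 stays deleted, so the wrapper keeps printing the `NotFound` error every cycle until the test's own timeout handling takes over.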
Aug 14 16:05:19.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 16:05:20.826: INFO: rc: 1
Aug 14 16:05:20.826: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 14 16:05:30 - 16:10:07: INFO: RunHostCmd retried every 10s; 25 further attempts returned rc: 1 with the same NotFound error as above (identical repeated output elided)
Aug 14 16:10:17.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 14 16:10:18.610: INFO: rc: 1
Aug 14 16:10:18.611: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Aug 14 16:10:18.611: INFO: Scaling statefulset ss to 0
Aug 14 16:10:18.633: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 14 16:10:18.637: INFO: Deleting all statefulset in ns statefulset-1164
Aug 14 16:10:18.641: INFO: Scaling statefulset ss to 0
Aug 14 16:10:18.655: INFO: Waiting for statefulset status.replicas updated to 0
Aug 14 16:10:18.659: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 14 16:10:18.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1164" for this suite.

• [SLOW TEST:372.029 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":275,"skipped":4710,"failed":0}
SSSSSSS
Aug 14 16:10:18.693: INFO: Running AfterSuite actions on all nodes
Aug 14 16:10:18.695: INFO: Running AfterSuite actions on node 1
Aug 14 16:10:18.695: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 7118.616 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS
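The `{"msg": ...}` lines interleaved with the log are machine-readable progress records emitted once per completed spec. A quick, dependency-free way to pull the final tallies out of a captured log with `grep` (the heredoc log content stands in for a real log file such as the one above; the filename is created here just for the demo):

```shell
#!/usr/bin/env bash
# Extract the completed/failed counts from the last JSON progress record in a log.
log=$(mktemp)
cat > "$log" <<'EOF'
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}
EOF

# Take the last occurrence of each field and strip the key.
completed=$(grep -o '"completed":[0-9]*' "$log" | tail -1 | cut -d: -f2)
failed=$(grep -o '"failed":[0-9]*' "$log" | tail -1 | cut -d: -f2)
echo "completed=$completed failed=$failed"
# prints "completed=275 failed=0"
rm -f "$log"
```

For anything beyond a quick check, a JSON-aware tool (e.g. `jq`) is more robust, since `grep` would also match these keys inside quoted strings.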