I0309 08:43:01.155718 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0309 08:43:01.155966 6 e2e.go:109] Starting e2e run "81504a0c-4615-4024-ab3d-e12d1d86561b" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583743380 - Will randomize all specs
Will run 278 of 4843 specs
Mar 9 08:43:01.260: INFO: >>> kubeConfig: /root/.kube/config
Mar 9 08:43:01.264: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 9 08:43:01.290: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 9 08:43:01.328: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 9 08:43:01.328: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 9 08:43:01.328: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 9 08:43:01.339: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 9 08:43:01.339: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 9 08:43:01.339: INFO: e2e test version: v1.17.3
Mar 9 08:43:01.340: INFO: kube-apiserver version: v1.17.2
Mar 9 08:43:01.340: INFO: >>> kubeConfig: /root/.kube/config
Mar 9 08:43:01.345: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:43:01.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
Mar 9 08:43:01.418: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-s8sxd in namespace proxy-2515
I0309 08:43:01.486459 6 runners.go:189] Created replication controller with name: proxy-service-s8sxd, namespace: proxy-2515, replica count: 1
I0309 08:43:02.536834 6 runners.go:189] proxy-service-s8sxd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0309 08:43:03.537007 6 runners.go:189] proxy-service-s8sxd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0309 08:43:04.537206 6 runners.go:189] proxy-service-s8sxd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0309 08:43:05.537394 6 runners.go:189] proxy-service-s8sxd Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 9 08:43:05.541: INFO: setup took 4.121459859s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Mar 9 08:43:05.550: INFO: (0) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 8.650421ms)
Mar 9 08:43:05.550: INFO: (0) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 9.253875ms)
Mar 9 08:43:05.553: INFO: (0) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 12.022163ms)
Mar 9 08:43:05.553: INFO: (0) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 11.951224ms)
Mar 9 08:43:05.553: INFO: (0) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 12.085426ms)
Mar 9 08:43:05.553: INFO: (0) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 12.484941ms)
Mar 9 08:43:05.554: INFO: (0) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 13.241085ms)
Mar 9 08:43:05.555: INFO: (0) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 14.039561ms)
Mar 9 08:43:05.555: INFO: (0) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 14.435916ms)
Mar 9 08:43:05.556: INFO: (0) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 14.462919ms)
Mar 9 08:43:05.556: INFO: (0) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 14.602554ms)
Mar 9 08:43:05.565: INFO: (0) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 24.246112ms)
Mar 9 08:43:05.565: INFO: (0) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 24.26529ms)
Mar 9 08:43:05.565: INFO: (0) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 24.250058ms)
Mar 9 08:43:05.565: INFO: (0) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 24.250859ms)
Mar 9 08:43:05.565: INFO: (0) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test<... (200; 8.097129ms)
Mar 9 08:43:05.574: INFO: (1) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 8.121088ms)
Mar 9 08:43:05.574: INFO: (1) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 8.261229ms)
Mar 9 08:43:05.574: INFO: (1) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 8.115932ms)
Mar 9 08:43:05.574: INFO: (1) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: ... (200; 8.286771ms)
Mar 9 08:43:05.574: INFO: (1) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 8.292787ms)
Mar 9 08:43:05.574: INFO: (1) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 8.269925ms)
Mar 9 08:43:05.574: INFO: (1) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 8.400893ms)
Mar 9 08:43:05.575: INFO: (1) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 9.152737ms)
Mar 9 08:43:05.575: INFO: (1) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 9.132603ms)
Mar 9 08:43:05.575: INFO: (1) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 9.158484ms)
Mar 9 08:43:05.575: INFO: (1) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 9.400031ms)
Mar 9 08:43:05.583: INFO: (2) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 7.827794ms)
Mar 9 08:43:05.583: INFO: (2) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 8.172137ms)
Mar 9 08:43:05.583: INFO: (2) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 8.162645ms)
Mar 9 08:43:05.583: INFO: (2) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 8.172562ms)
Mar 9 08:43:05.584: INFO: (2) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 8.300996ms)
Mar 9 08:43:05.584: INFO: (2) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 8.276843ms)
Mar 9 08:43:05.584: INFO: (2) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 8.315897ms)
Mar 9 08:43:05.584: INFO: (2) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 8.32463ms)
Mar 9 08:43:05.584: INFO: (2) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test<... (200; 8.62282ms)
Mar 9 08:43:05.586: INFO: (2) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 10.202565ms)
Mar 9 08:43:05.586: INFO: (2) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 10.156805ms)
Mar 9 08:43:05.586: INFO: (2) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 10.359883ms)
Mar 9 08:43:05.586: INFO: (2) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 10.435782ms)
Mar 9 08:43:05.586: INFO: (2) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 10.729999ms)
Mar 9 08:43:05.586: INFO: (2) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 10.708579ms)
Mar 9 08:43:05.590: INFO: (3) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 3.876185ms)
Mar 9 08:43:05.591: INFO: (3) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 4.58176ms)
Mar 9 08:43:05.592: INFO: (3) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test (200; 8.082717ms)
Mar 9 08:43:05.594: INFO: (3) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 8.167361ms)
Mar 9 08:43:05.594: INFO: (3) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 8.143638ms)
Mar 9 08:43:05.600: INFO: (4) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 4.306837ms)
Mar 9 08:43:05.601: INFO: (4) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 4.853165ms)
Mar 9 08:43:05.601: INFO: (4) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 5.5089ms)
Mar 9 08:43:05.602: INFO: (4) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 5.757957ms)
Mar 9 08:43:05.602: INFO: (4) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 5.926101ms)
Mar 9 08:43:05.602: INFO: (4) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 6.129462ms)
Mar 9 08:43:05.602: INFO: (4) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test (200; 5.009396ms)
Mar 9 08:43:05.608: INFO: (5) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 5.113772ms)
Mar 9 08:43:05.608: INFO: (5) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 5.490459ms)
Mar 9 08:43:05.608: INFO: (5) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 5.670575ms)
Mar 9 08:43:05.608: INFO: (5) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 5.730403ms)
Mar 9 08:43:05.609: INFO: (5) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: ... (200; 6.393214ms)
Mar 9 08:43:05.609: INFO: (5) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 6.546384ms)
Mar 9 08:43:05.609: INFO: (5) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 6.674758ms)
Mar 9 08:43:05.609: INFO: (5) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 7.078ms)
Mar 9 08:43:05.610: INFO: (5) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 7.269249ms)
Mar 9 08:43:05.610: INFO: (5) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 7.31646ms)
Mar 9 08:43:05.610: INFO: (5) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 7.348374ms)
Mar 9 08:43:05.610: INFO: (5) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 7.566257ms)
Mar 9 08:43:05.610: INFO: (5) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 7.535755ms)
Mar 9 08:43:05.614: INFO: (6) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 3.847649ms)
Mar 9 08:43:05.614: INFO: (6) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 3.993856ms)
Mar 9 08:43:05.616: INFO: (6) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 5.83991ms)
Mar 9 08:43:05.616: INFO: (6) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 5.823773ms)
Mar 9 08:43:05.616: INFO: (6) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 6.251117ms)
Mar 9 08:43:05.616: INFO: (6) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test (200; 6.461611ms)
Mar 9 08:43:05.617: INFO: (6) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 6.695002ms)
Mar 9 08:43:05.617: INFO: (6) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 6.75768ms)
Mar 9 08:43:05.617: INFO: (6) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 7.168344ms)
Mar 9 08:43:05.618: INFO: (6) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 7.574405ms)
Mar 9 08:43:05.618: INFO: (6) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 7.581975ms)
Mar 9 08:43:05.622: INFO: (7) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 4.209489ms)
Mar 9 08:43:05.622: INFO: (7) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test (200; 9.003372ms)
Mar 9 08:43:05.627: INFO: (7) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 9.110796ms)
Mar 9 08:43:05.627: INFO: (7) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 9.123858ms)
Mar 9 08:43:05.629: INFO: (7) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 10.882125ms)
Mar 9 08:43:05.629: INFO: (7) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 11.016574ms)
Mar 9 08:43:05.629: INFO: (7) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 11.247608ms)
Mar 9 08:43:05.629: INFO: (7) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 11.571956ms)
Mar 9 08:43:05.631: INFO: (7) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 13.170741ms)
Mar 9 08:43:05.631: INFO: (7) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 13.226718ms)
Mar 9 08:43:05.631: INFO: (7) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 13.208495ms)
Mar 9 08:43:05.631: INFO: (7) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 13.24661ms)
Mar 9 08:43:05.631: INFO: (7) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 13.301532ms)
Mar 9 08:43:05.636: INFO: (8) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test<... (200; 4.997661ms)
Mar 9 08:43:05.636: INFO: (8) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 5.061541ms)
Mar 9 08:43:05.636: INFO: (8) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 5.130925ms)
Mar 9 08:43:05.637: INFO: (8) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 5.279899ms)
Mar 9 08:43:05.637: INFO: (8) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 5.18116ms)
Mar 9 08:43:05.637: INFO: (8) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 5.324326ms)
Mar 9 08:43:05.637: INFO: (8) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 5.245703ms)
Mar 9 08:43:05.637: INFO: (8) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 5.694995ms)
Mar 9 08:43:05.637: INFO: (8) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 5.64604ms)
Mar 9 08:43:05.638: INFO: (8) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 6.694665ms)
Mar 9 08:43:05.638: INFO: (8) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 6.735767ms)
Mar 9 08:43:05.638: INFO: (8) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 6.825036ms)
Mar 9 08:43:05.638: INFO: (8) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 7.050708ms)
Mar 9 08:43:05.642: INFO: (9) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 3.683975ms)
Mar 9 08:43:05.642: INFO: (9) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 3.81794ms)
Mar 9 08:43:05.644: INFO: (9) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 4.960625ms)
Mar 9 08:43:05.644: INFO: (9) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 5.870246ms)
Mar 9 08:43:05.644: INFO: (9) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 5.89777ms)
Mar 9 08:43:05.644: INFO: (9) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 5.95394ms)
Mar 9 08:43:05.644: INFO: (9) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test<... (200; 5.989362ms)
Mar 9 08:43:05.645: INFO: (9) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 6.169431ms)
Mar 9 08:43:05.645: INFO: (9) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 6.359719ms)
Mar 9 08:43:05.645: INFO: (9) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 6.287519ms)
Mar 9 08:43:05.645: INFO: (9) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 6.328979ms)
Mar 9 08:43:05.646: INFO: (9) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 7.216813ms)
Mar 9 08:43:05.646: INFO: (9) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 7.167136ms)
Mar 9 08:43:05.651: INFO: (10) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 4.782853ms)
Mar 9 08:43:05.651: INFO: (10) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 4.864819ms)
Mar 9 08:43:05.651: INFO: (10) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 5.56644ms)
Mar 9 08:43:05.652: INFO: (10) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 5.788802ms)
Mar 9 08:43:05.652: INFO: (10) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 5.904386ms)
Mar 9 08:43:05.652: INFO: (10) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 6.243088ms)
Mar 9 08:43:05.652: INFO: (10) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 6.630128ms)
Mar 9 08:43:05.652: INFO: (10) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 6.652086ms)
Mar 9 08:43:05.653: INFO: (10) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test (200; 3.497838ms)
Mar 9 08:43:05.658: INFO: (11) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 3.784123ms)
Mar 9 08:43:05.658: INFO: (11) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 3.83077ms)
Mar 9 08:43:05.658: INFO: (11) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 3.838292ms)
Mar 9 08:43:05.658: INFO: (11) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 3.89106ms)
Mar 9 08:43:05.658: INFO: (11) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 3.972653ms)
Mar 9 08:43:05.658: INFO: (11) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 3.957785ms)
Mar 9 08:43:05.659: INFO: (11) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 4.696088ms)
Mar 9 08:43:05.659: INFO: (11) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 4.714899ms)
Mar 9 08:43:05.659: INFO: (11) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 4.818873ms)
Mar 9 08:43:05.659: INFO: (11) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 5.088031ms)
Mar 9 08:43:05.659: INFO: (11) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 5.110534ms)
Mar 9 08:43:05.659: INFO: (11) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 5.076965ms)
Mar 9 08:43:05.659: INFO: (11) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test<... (200; 6.552499ms)
Mar 9 08:43:05.666: INFO: (12) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 6.519879ms)
Mar 9 08:43:05.666: INFO: (12) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 6.576684ms)
Mar 9 08:43:05.666: INFO: (12) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 6.530393ms)
Mar 9 08:43:05.666: INFO: (12) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test<... (200; 4.959288ms)
Mar 9 08:43:05.671: INFO: (13) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 5.000549ms)
Mar 9 08:43:05.671: INFO: (13) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 5.090378ms)
Mar 9 08:43:05.671: INFO: (13) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: ... (200; 5.60503ms)
Mar 9 08:43:05.671: INFO: (13) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 5.623222ms)
Mar 9 08:43:05.672: INFO: (13) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 5.927999ms)
Mar 9 08:43:05.672: INFO: (13) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 6.031708ms)
Mar 9 08:43:05.672: INFO: (13) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 5.953604ms)
Mar 9 08:43:05.672: INFO: (13) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 6.022808ms)
Mar 9 08:43:05.672: INFO: (13) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 5.993525ms)
Mar 9 08:43:05.672: INFO: (13) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 6.049575ms)
Mar 9 08:43:05.672: INFO: (13) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 6.037002ms)
Mar 9 08:43:05.675: INFO: (14) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 3.277174ms)
Mar 9 08:43:05.676: INFO: (14) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: ... (200; 4.929631ms)
Mar 9 08:43:05.677: INFO: (14) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 4.795642ms)
Mar 9 08:43:05.677: INFO: (14) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 4.874741ms)
Mar 9 08:43:05.677: INFO: (14) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 4.872404ms)
Mar 9 08:43:05.677: INFO: (14) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 4.834111ms)
Mar 9 08:43:05.677: INFO: (14) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 5.077107ms)
Mar 9 08:43:05.677: INFO: (14) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 5.025868ms)
Mar 9 08:43:05.677: INFO: (14) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 4.910013ms)
Mar 9 08:43:05.677: INFO: (14) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 5.135562ms)
Mar 9 08:43:05.677: INFO: (14) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 5.153616ms)
Mar 9 08:43:05.678: INFO: (14) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 5.354622ms)
Mar 9 08:43:05.678: INFO: (14) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 5.561105ms)
Mar 9 08:43:05.678: INFO: (14) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 5.726452ms)
Mar 9 08:43:05.678: INFO: (14) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 5.828556ms)
Mar 9 08:43:05.680: INFO: (15) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 2.284032ms)
Mar 9 08:43:05.681: INFO: (15) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 3.15767ms)
Mar 9 08:43:05.682: INFO: (15) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 3.82992ms)
Mar 9 08:43:05.682: INFO: (15) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 3.92288ms)
Mar 9 08:43:05.682: INFO: (15) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 3.999723ms)
Mar 9 08:43:05.682: INFO: (15) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 4.038353ms)
Mar 9 08:43:05.682: INFO: (15) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 4.095209ms)
Mar 9 08:43:05.682: INFO: (15) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 4.284872ms)
Mar 9 08:43:05.682: INFO: (15) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 4.342158ms)
Mar 9 08:43:05.682: INFO: (15) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 4.211071ms)
Mar 9 08:43:05.683: INFO: (15) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test (200; 3.352483ms)
Mar 9 08:43:05.687: INFO: (16) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 3.430435ms)
Mar 9 08:43:05.687: INFO: (16) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 3.489362ms)
Mar 9 08:43:05.687: INFO: (16) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 3.744262ms)
Mar 9 08:43:05.687: INFO: (16) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 3.741549ms)
Mar 9 08:43:05.687: INFO: (16) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: ... (200; 4.473586ms)
Mar 9 08:43:05.688: INFO: (16) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 4.618021ms)
Mar 9 08:43:05.688: INFO: (16) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 4.655221ms)
Mar 9 08:43:05.688: INFO: (16) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 4.871954ms)
Mar 9 08:43:05.688: INFO: (16) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 4.943959ms)
Mar 9 08:43:05.688: INFO: (16) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 4.89125ms)
Mar 9 08:43:05.688: INFO: (16) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 4.919505ms)
Mar 9 08:43:05.692: INFO: (17) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 3.782237ms)
Mar 9 08:43:05.693: INFO: (17) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 4.314569ms)
Mar 9 08:43:05.693: INFO: (17) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 4.314486ms)
Mar 9 08:43:05.693: INFO: (17) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 4.358873ms)
Mar 9 08:43:05.693: INFO: (17) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 4.431085ms)
Mar 9 08:43:05.693: INFO: (17) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 4.378442ms)
Mar 9 08:43:05.693: INFO: (17) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 4.423072ms)
Mar 9 08:43:05.693: INFO: (17) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:1080/proxy/: test<... (200; 4.695175ms)
Mar 9 08:43:05.693: INFO: (17) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 4.650506ms)
Mar 9 08:43:05.693: INFO: (17) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:460/proxy/: tls baz (200; 4.624133ms)
Mar 9 08:43:05.693: INFO: (17) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test<... (200; 3.008264ms)
Mar 9 08:43:05.697: INFO: (18) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 3.007029ms)
Mar 9 08:43:05.697: INFO: (18) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 3.171436ms)
Mar 9 08:43:05.697: INFO: (18) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 3.25288ms)
Mar 9 08:43:05.703: INFO: (18) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 9.122412ms)
Mar 9 08:43:05.703: INFO: (18) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 9.610126ms)
Mar 9 08:43:05.703: INFO: (18) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 9.743528ms)
Mar 9 08:43:05.703: INFO: (18) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 9.684805ms)
Mar 9 08:43:05.703: INFO: (18) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 9.683882ms)
Mar 9 08:43:05.703: INFO: (18) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:443/proxy/: test<... (200; 5.53017ms)
Mar 9 08:43:05.712: INFO: (19) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g/proxy/: test (200; 5.632921ms)
Mar 9 08:43:05.712: INFO: (19) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:162/proxy/: bar (200; 5.323189ms)
Mar 9 08:43:05.712: INFO: (19) /api/v1/namespaces/proxy-2515/pods/https:proxy-service-s8sxd-nz58g:462/proxy/: tls qux (200; 5.246196ms)
Mar 9 08:43:05.712: INFO: (19) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 5.954776ms)
Mar 9 08:43:05.712: INFO: (19) /api/v1/namespaces/proxy-2515/pods/http:proxy-service-s8sxd-nz58g:1080/proxy/: ... (200; 6.14926ms)
Mar 9 08:43:05.712: INFO: (19) /api/v1/namespaces/proxy-2515/pods/proxy-service-s8sxd-nz58g:160/proxy/: foo (200; 6.109758ms)
Mar 9 08:43:05.712: INFO: (19) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname2/proxy/: bar (200; 6.383041ms)
Mar 9 08:43:05.712: INFO: (19) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname1/proxy/: foo (200; 7.011122ms)
Mar 9 08:43:05.713: INFO: (19) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname1/proxy/: tls baz (200; 7.072562ms)
Mar 9 08:43:05.713: INFO: (19) /api/v1/namespaces/proxy-2515/services/http:proxy-service-s8sxd:portname1/proxy/: foo (200; 6.698973ms)
Mar 9 08:43:05.713: INFO: (19) /api/v1/namespaces/proxy-2515/services/proxy-service-s8sxd:portname2/proxy/: bar (200; 6.44945ms)
Mar 9 08:43:05.713: INFO: (19) /api/v1/namespaces/proxy-2515/services/https:proxy-service-s8sxd:tlsportname2/proxy/: tls qux (200; 6.019828ms)
STEP: deleting ReplicationController proxy-service-s8sxd in namespace proxy-2515, will wait for the garbage collector to delete the pods
Mar 9 08:43:05.782: INFO: Deleting ReplicationController proxy-service-s8sxd took: 16.87026ms
Mar 9 08:43:06.082: INFO: Terminating ReplicationController proxy-service-s8sxd pods took: 300.217308ms
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:43:08.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2515" for this suite.
• [SLOW TEST:6.879 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":1,"skipped":15,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
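Editor's note: each of the 320 attempts above is a GET against the API server's proxy subresource, e.g. /api/v1/namespaces/proxy-2515/pods/<pod>:<port>/proxy/ or the service equivalent. A minimal client-go sketch of one such request follows; the namespace, pod name, and port are taken from the log, while the kubeconfig path mirrors the ">>> kubeConfig" line and the DoRaw(ctx) form assumes client-go v0.18 or newer.

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite logs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET .../pods/proxy-service-s8sxd-nz58g:160/proxy/ through the apiserver,
	// one of the per-attempt requests logged above (port 160 answers "foo").
	body, err := cs.CoreV1().
		Pods("proxy-2515").
		ProxyGet("http", "proxy-service-s8sxd-nz58g", "160", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}
```

The service-backed attempts (services/proxy-service-s8sxd:portname1/proxy/ and friends) use the analogous cs.CoreV1().Services(ns).ProxyGet(...). The later sketches in this log reuse a clientset built exactly as above.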
[sig-storage] Secrets
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:43:08.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-fee282a7-b31e-49a0-89aa-9e755b2c0eca
STEP: Creating secret with name s-test-opt-upd-b7130573-89fa-4f1d-b0b4-ff532d6c4dbf
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-fee282a7-b31e-49a0-89aa-9e755b2c0eca
STEP: Updating secret s-test-opt-upd-b7130573-89fa-4f1d-b0b4-ff532d6c4dbf
STEP: Creating secret with name s-test-opt-create-91d927c5-0c47-41c1-85a6-af37381090aa
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:44:24.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-459" for this suite.
• [SLOW TEST:76.604 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":44,"failed":0}
SSSSSSSSSSSSS
------------------------------
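Editor's note: the test above mounts secrets with optional=true, then deletes one and updates the other, and waits for the kubelet to re-project the files into the running pod. A minimal sketch of the pod shape being driven, with hypothetical secret and pod names, reusing a clientset built as in the first sketch:

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createOptionalSecretPod mounts a secret volume marked optional, so the pod
// starts even if the secret is absent and picks up later creates/updates.
func createOptionalSecretPod(cs *kubernetes.Clientset, ns string) error {
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "optional-secret-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-del-demo", // hypothetical; may not exist yet
						Optional:   &optional,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "watcher",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do ls /etc/secret; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-vol",
					MountPath: "/etc/secret",
				}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}
```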
[sig-storage] Projected configMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:44:24.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-a6aa49c6-d725-474f-97d1-a0c7569b3212
STEP: Creating configMap with name cm-test-opt-upd-4418bee5-0fdb-40a5-b869-6110a5282181
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a6aa49c6-d725-474f-97d1-a0c7569b3212
STEP: Updating configmap cm-test-opt-upd-4418bee5-0fdb-40a5-b869-6110a5282181
STEP: Creating configMap with name cm-test-opt-create-0c32d5c8-6a42-4b19-9ff5-a432fe6ec94f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:45:55.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9882" for this suite.
• [SLOW TEST:90.896 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":57,"failed":0}
SSSSSS
------------------------------
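Editor's note: same optional-update pattern as the Secrets case, but the configMap is delivered through a projected volume source. A hedged sketch of that volume shape (names hypothetical):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createProjectedConfigMapPod mounts a configMap via a projected volume with
// optional=true, the shape the test above creates and then mutates.
func createProjectedConfigMapPod(cs *kubernetes.Clientset, ns string) error {
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "cm-test-opt-upd-demo", // hypothetical
								},
								Optional: &optional,
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-vol", MountPath: "/etc/projected"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}
```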
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":63,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:45:57.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 9 08:45:57.962: INFO: Waiting up to 5m0s for pod "pod-4d29d14a-f609-4f85-8a23-38d80bb8d27d" in namespace "emptydir-6229" to be "success or failure" Mar 9 08:45:57.966: INFO: Pod "pod-4d29d14a-f609-4f85-8a23-38d80bb8d27d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.479329ms Mar 9 08:45:59.970: INFO: Pod "pod-4d29d14a-f609-4f85-8a23-38d80bb8d27d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007824893s STEP: Saw pod success Mar 9 08:45:59.970: INFO: Pod "pod-4d29d14a-f609-4f85-8a23-38d80bb8d27d" satisfied condition "success or failure" Mar 9 08:45:59.973: INFO: Trying to get logs from node jerma-worker2 pod pod-4d29d14a-f609-4f85-8a23-38d80bb8d27d container test-container: STEP: delete the pod Mar 9 08:45:59.994: INFO: Waiting for pod pod-4d29d14a-f609-4f85-8a23-38d80bb8d27d to disappear Mar 9 08:45:59.998: INFO: Pod pod-4d29d14a-f609-4f85-8a23-38d80bb8d27d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:45:59.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6229" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":83,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:46:00.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-be1932d8-e3b1-45a8-803e-5e29732bb25b STEP: Creating a pod to test consume secrets Mar 9 08:46:00.088: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71a99272-dc1b-47a6-b388-e2b7d1d6e75c" in namespace "projected-8144" to be "success or failure" Mar 9 08:46:00.094: INFO: Pod "pod-projected-secrets-71a99272-dc1b-47a6-b388-e2b7d1d6e75c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.451483ms Mar 9 08:46:02.098: INFO: Pod "pod-projected-secrets-71a99272-dc1b-47a6-b388-e2b7d1d6e75c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009086412s STEP: Saw pod success Mar 9 08:46:02.098: INFO: Pod "pod-projected-secrets-71a99272-dc1b-47a6-b388-e2b7d1d6e75c" satisfied condition "success or failure" Mar 9 08:46:02.101: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-71a99272-dc1b-47a6-b388-e2b7d1d6e75c container projected-secret-volume-test: STEP: delete the pod Mar 9 08:46:02.119: INFO: Waiting for pod pod-projected-secrets-71a99272-dc1b-47a6-b388-e2b7d1d6e75c to disappear Mar 9 08:46:02.124: INFO: Pod pod-projected-secrets-71a99272-dc1b-47a6-b388-e2b7d1d6e75c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:46:02.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8144" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:46:02.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 9 08:46:02.201: INFO: Waiting up to 5m0s for pod "pod-dabea7e8-f487-46a9-a869-bb0a803f0b3b" in namespace "emptydir-8661" to be "success or failure" Mar 9 08:46:02.217: INFO: Pod "pod-dabea7e8-f487-46a9-a869-bb0a803f0b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.360687ms Mar 9 08:46:04.221: INFO: Pod "pod-dabea7e8-f487-46a9-a869-bb0a803f0b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02016887s Mar 9 08:46:06.225: INFO: Pod "pod-dabea7e8-f487-46a9-a869-bb0a803f0b3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023976418s STEP: Saw pod success Mar 9 08:46:06.225: INFO: Pod "pod-dabea7e8-f487-46a9-a869-bb0a803f0b3b" satisfied condition "success or failure" Mar 9 08:46:06.227: INFO: Trying to get logs from node jerma-worker2 pod pod-dabea7e8-f487-46a9-a869-bb0a803f0b3b container test-container: STEP: delete the pod Mar 9 08:46:06.249: INFO: Waiting for pod pod-dabea7e8-f487-46a9-a869-bb0a803f0b3b to disappear Mar 9 08:46:06.253: INFO: Pod pod-dabea7e8-f487-46a9-a869-bb0a803f0b3b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:46:06.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8661" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":147,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:46:06.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 08:46:06.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3948a44c-d86b-4860-8d6b-102cf7d9b10d" in namespace "downward-api-525" to be "success or failure" Mar 9 08:46:06.385: INFO: Pod "downwardapi-volume-3948a44c-d86b-4860-8d6b-102cf7d9b10d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.689475ms Mar 9 08:46:08.389: INFO: Pod "downwardapi-volume-3948a44c-d86b-4860-8d6b-102cf7d9b10d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.031074153s STEP: Saw pod success Mar 9 08:46:08.389: INFO: Pod "downwardapi-volume-3948a44c-d86b-4860-8d6b-102cf7d9b10d" satisfied condition "success or failure" Mar 9 08:46:08.391: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3948a44c-d86b-4860-8d6b-102cf7d9b10d container client-container: STEP: delete the pod Mar 9 08:46:08.418: INFO: Waiting for pod downwardapi-volume-3948a44c-d86b-4860-8d6b-102cf7d9b10d to disappear Mar 9 08:46:08.428: INFO: Pod downwardapi-volume-3948a44c-d86b-4860-8d6b-102cf7d9b10d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:46:08.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-525" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":161,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:46:08.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 08:46:08.615: INFO: Create a RollingUpdate DaemonSet Mar 9 08:46:08.618: INFO: Check that daemon pods launch on every node of the cluster Mar 9 08:46:08.641: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:46:08.649: INFO: Number of nodes with available pods: 0 Mar 9 08:46:08.649: INFO: Node jerma-worker is running more than one daemon pod Mar 9 08:46:09.658: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:46:09.662: INFO: Number of nodes with available pods: 0 Mar 9 08:46:09.662: INFO: Node jerma-worker is running more than one daemon pod Mar 9 08:46:10.653: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:46:10.657: INFO: Number of nodes with available pods: 2 Mar 9 08:46:10.657: INFO: Number of running nodes: 2, number of available pods: 2 Mar 9 08:46:10.657: INFO: Update the DaemonSet to trigger a rollout Mar 9 08:46:10.665: INFO: Updating DaemonSet daemon-set Mar 9 08:46:14.692: INFO: Roll back the DaemonSet before rollout is complete Mar 9 08:46:14.698: INFO: Updating DaemonSet daemon-set Mar 9 08:46:14.698: INFO: Make sure DaemonSet rollback is complete Mar 9 08:46:14.718: INFO: Wrong image for pod: daemon-set-pj2jg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 9 08:46:14.718: INFO: Pod daemon-set-pj2jg is not available Mar 9 08:46:14.754: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:46:15.757: INFO: Wrong image for pod: daemon-set-pj2jg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
[sig-apps] Daemon set [Serial]
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:46:08.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 08:46:08.615: INFO: Create a RollingUpdate DaemonSet
Mar 9 08:46:08.618: INFO: Check that daemon pods launch on every node of the cluster
Mar 9 08:46:08.641: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 9 08:46:08.649: INFO: Number of nodes with available pods: 0
Mar 9 08:46:08.649: INFO: Node jerma-worker is running more than one daemon pod
Mar 9 08:46:09.658: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 9 08:46:09.662: INFO: Number of nodes with available pods: 0
Mar 9 08:46:09.662: INFO: Node jerma-worker is running more than one daemon pod
Mar 9 08:46:10.653: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 9 08:46:10.657: INFO: Number of nodes with available pods: 2
Mar 9 08:46:10.657: INFO: Number of running nodes: 2, number of available pods: 2
Mar 9 08:46:10.657: INFO: Update the DaemonSet to trigger a rollout
Mar 9 08:46:10.665: INFO: Updating DaemonSet daemon-set
Mar 9 08:46:14.692: INFO: Roll back the DaemonSet before rollout is complete
Mar 9 08:46:14.698: INFO: Updating DaemonSet daemon-set
Mar 9 08:46:14.698: INFO: Make sure DaemonSet rollback is complete
Mar 9 08:46:14.718: INFO: Wrong image for pod: daemon-set-pj2jg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 9 08:46:14.718: INFO: Pod daemon-set-pj2jg is not available
Mar 9 08:46:14.754: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 9 08:46:15.757: INFO: Wrong image for pod: daemon-set-pj2jg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 9 08:46:15.758: INFO: Pod daemon-set-pj2jg is not available
Mar 9 08:46:15.761: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 9 08:46:16.759: INFO: Pod daemon-set-22fnj is not available
Mar 9 08:46:16.763: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5040, will wait for the garbage collector to delete the pods
Mar 9 08:46:16.826: INFO: Deleting DaemonSet.extensions daemon-set took: 5.093879ms
Mar 9 08:46:17.127: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.221723ms
Mar 9 08:46:26.146: INFO: Number of nodes with available pods: 0
Mar 9 08:46:26.146: INFO: Number of running nodes: 0, number of available pods: 0
Mar 9 08:46:26.152: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5040/daemonsets","resourceVersion":"256430"},"items":null}
Mar 9 08:46:26.155: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5040/pods","resourceVersion":"256430"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:46:26.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5040" for this suite.
• [SLOW TEST:17.694 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":9,"skipped":174,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
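Editor's note: at the API level a DaemonSet "rollback" is just re-applying the prior pod template (kubectl rollout undo does this via ControllerRevisions). A minimal sketch of what the test drives, restoring the known-good image the log names after the bad foo:non-existent update:

```go
package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollBackDaemonSet reverts the pod template image of "daemon-set", which
// triggers the RollingUpdate back without restarting already-healthy pods.
func rollBackDaemonSet(cs *kubernetes.Clientset, ns string) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// The test briefly pointed the template at "foo:non-existent"; restoring
	// the previous image is the rollback.
	ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	_, err = cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
	return err
}
```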
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:46:26.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:46:39.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7948" for this suite.
• [SLOW TEST:13.173 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":10,"skipped":237,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
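Editor's note: the quota STEPs above boil down to creating a ResourceQuota with hard limits and watching status.used track pod creation and deletion. A hedged sketch (quota name and limits illustrative):

```go
package e2esketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createQuota creates a hard quota and prints the server-reported usage.
func createQuota(cs *kubernetes.Clientset, ns string) error {
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"}, // illustrative
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods:   resource.MustParse("2"),
				corev1.ResourceCPU:    resource.MustParse("1"),
				corev1.ResourceMemory: resource.MustParse("1Gi"),
			},
		},
	}
	created, err := cs.CoreV1().ResourceQuotas(ns).Create(context.TODO(), rq, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Empty until the quota controller syncs; the test polls for exactly this.
	fmt.Println("used:", created.Status.Used)
	return nil
}
```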
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":11,"skipped":286,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:46:39.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-b57a0ad4-a2c0-47ad-864e-fa9886e46bdb STEP: Creating a pod to test consume configMaps Mar 9 08:46:39.642: INFO: Waiting up to 5m0s for pod "pod-configmaps-f253316b-0313-4dea-8a44-6a6184dbc82e" in namespace "configmap-9508" to be "success or failure" Mar 9 08:46:39.664: INFO: Pod "pod-configmaps-f253316b-0313-4dea-8a44-6a6184dbc82e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.891261ms Mar 9 08:46:41.668: INFO: Pod "pod-configmaps-f253316b-0313-4dea-8a44-6a6184dbc82e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025480697s STEP: Saw pod success Mar 9 08:46:41.668: INFO: Pod "pod-configmaps-f253316b-0313-4dea-8a44-6a6184dbc82e" satisfied condition "success or failure" Mar 9 08:46:41.670: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f253316b-0313-4dea-8a44-6a6184dbc82e container configmap-volume-test: STEP: delete the pod Mar 9 08:46:41.683: INFO: Waiting for pod pod-configmaps-f253316b-0313-4dea-8a44-6a6184dbc82e to disappear Mar 9 08:46:41.702: INFO: Pod pod-configmaps-f253316b-0313-4dea-8a44-6a6184dbc82e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:46:41.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9508" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:46:41.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4508, will wait for the garbage collector to delete the pods Mar 9 08:46:43.853: INFO: Deleting Job.batch foo took: 6.430125ms Mar 9 08:46:44.153: INFO: Terminating Job.batch foo pods took: 300.289045ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:47:26.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4508" for this suite. • [SLOW TEST:44.557 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":13,"skipped":321,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:47:26.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 08:47:26.380: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff2ca768-d4f6-429e-8fc8-d9a94cb9f3c8" in namespace "downward-api-9755" to be "success or failure" Mar 9 08:47:26.384: INFO: Pod "downwardapi-volume-ff2ca768-d4f6-429e-8fc8-d9a94cb9f3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.475247ms Mar 9 08:47:28.388: INFO: Pod "downwardapi-volume-ff2ca768-d4f6-429e-8fc8-d9a94cb9f3c8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00785436s Mar 9 08:47:30.392: INFO: Pod "downwardapi-volume-ff2ca768-d4f6-429e-8fc8-d9a94cb9f3c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011384761s STEP: Saw pod success Mar 9 08:47:30.392: INFO: Pod "downwardapi-volume-ff2ca768-d4f6-429e-8fc8-d9a94cb9f3c8" satisfied condition "success or failure" Mar 9 08:47:30.394: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ff2ca768-d4f6-429e-8fc8-d9a94cb9f3c8 container client-container: STEP: delete the pod Mar 9 08:47:30.424: INFO: Waiting for pod downwardapi-volume-ff2ca768-d4f6-429e-8fc8-d9a94cb9f3c8 to disappear Mar 9 08:47:30.432: INFO: Pod downwardapi-volume-ff2ca768-d4f6-429e-8fc8-d9a94cb9f3c8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:47:30.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9755" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":322,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:47:30.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 9 08:47:30.518: INFO: Waiting up to 5m0s for pod "pod-38800cf1-c568-421d-9088-a5261ff7bc3c" in namespace "emptydir-1519" to be "success or failure" Mar 9 08:47:30.522: INFO: Pod "pod-38800cf1-c568-421d-9088-a5261ff7bc3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244178ms Mar 9 08:47:32.526: INFO: Pod "pod-38800cf1-c568-421d-9088-a5261ff7bc3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00830426s Mar 9 08:47:34.530: INFO: Pod "pod-38800cf1-c568-421d-9088-a5261ff7bc3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012164448s STEP: Saw pod success Mar 9 08:47:34.530: INFO: Pod "pod-38800cf1-c568-421d-9088-a5261ff7bc3c" satisfied condition "success or failure" Mar 9 08:47:34.533: INFO: Trying to get logs from node jerma-worker2 pod pod-38800cf1-c568-421d-9088-a5261ff7bc3c container test-container: STEP: delete the pod Mar 9 08:47:34.571: INFO: Waiting for pod pod-38800cf1-c568-421d-9088-a5261ff7bc3c to disappear Mar 9 08:47:34.579: INFO: Pod pod-38800cf1-c568-421d-9088-a5261ff7bc3c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:47:34.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1519" for this suite. 
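The parenthesized test name encodes the parameters: write as root, expect mode 0777, use the default emptyDir medium (node disk, as opposed to medium: Memory). Roughly, with busybox in place of the suite's mount-test image and a hypothetical pod name:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-0777-demo      # hypothetical
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 0                    # the "root" in (root,0777,default)
    containers:
    - name: test-container            # container name taken from the log
      image: busybox
      command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                    # omitting medium selects the default (node-backed) medium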
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":341,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:47:34.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 9 08:47:34.664: INFO: Waiting up to 5m0s for pod "client-containers-4c02e47a-55a0-4822-9fae-927706597beb" in namespace "containers-6707" to be "success or failure" Mar 9 08:47:34.669: INFO: Pod "client-containers-4c02e47a-55a0-4822-9fae-927706597beb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.717878ms Mar 9 08:47:36.673: INFO: Pod "client-containers-4c02e47a-55a0-4822-9fae-927706597beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00874652s STEP: Saw pod success Mar 9 08:47:36.673: INFO: Pod "client-containers-4c02e47a-55a0-4822-9fae-927706597beb" satisfied condition "success or failure" Mar 9 08:47:36.676: INFO: Trying to get logs from node jerma-worker2 pod client-containers-4c02e47a-55a0-4822-9fae-927706597beb container test-container: STEP: delete the pod Mar 9 08:47:36.867: INFO: Waiting for pod client-containers-4c02e47a-55a0-4822-9fae-927706597beb to disappear Mar 9 08:47:36.878: INFO: Pod client-containers-4c02e47a-55a0-4822-9fae-927706597beb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:47:36.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6707" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":344,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:47:36.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 08:47:36.956: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be15c034-98c1-4658-9eed-515c10cbce42" in namespace "downward-api-5634" to be "success or failure" Mar 9 08:47:36.958: INFO: Pod "downwardapi-volume-be15c034-98c1-4658-9eed-515c10cbce42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327653ms Mar 9 08:47:38.963: INFO: Pod "downwardapi-volume-be15c034-98c1-4658-9eed-515c10cbce42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007353815s STEP: Saw pod success Mar 9 08:47:38.963: INFO: Pod "downwardapi-volume-be15c034-98c1-4658-9eed-515c10cbce42" satisfied condition "success or failure" Mar 9 08:47:38.966: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-be15c034-98c1-4658-9eed-515c10cbce42 container client-container: STEP: delete the pod Mar 9 08:47:39.029: INFO: Waiting for pod downwardapi-volume-be15c034-98c1-4658-9eed-515c10cbce42 to disappear Mar 9 08:47:39.040: INFO: Pod downwardapi-volume-be15c034-98c1-4658-9eed-515c10cbce42 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:47:39.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5634" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":360,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:47:39.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2167 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2167 STEP: creating replication controller externalsvc in namespace services-2167 I0309 08:47:39.215948 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2167, replica count: 2 I0309 08:47:42.266304 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 9 08:47:42.335: INFO: Creating new exec pod Mar 9 08:47:44.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2167 execpodzppb5 -- /bin/sh -x -c nslookup nodeport-service' Mar 9 08:47:46.051: INFO: stderr: "I0309 08:47:45.961191 50 log.go:172] (0xc0003c11e0) (0xc00062de00) Create stream\nI0309 08:47:45.961240 50 log.go:172] (0xc0003c11e0) (0xc00062de00) Stream added, broadcasting: 1\nI0309 08:47:45.964012 50 log.go:172] (0xc0003c11e0) Reply frame received for 1\nI0309 08:47:45.964066 50 log.go:172] (0xc0003c11e0) (0xc00062dea0) Create stream\nI0309 08:47:45.964087 50 log.go:172] (0xc0003c11e0) (0xc00062dea0) Stream added, broadcasting: 3\nI0309 08:47:45.965332 50 log.go:172] (0xc0003c11e0) Reply frame received for 3\nI0309 08:47:45.965436 50 log.go:172] (0xc0003c11e0) (0xc0005ca640) Create stream\nI0309 08:47:45.965458 50 log.go:172] (0xc0003c11e0) (0xc0005ca640) Stream added, broadcasting: 5\nI0309 08:47:45.966529 50 log.go:172] (0xc0003c11e0) Reply frame received for 5\nI0309 08:47:46.034036 50 log.go:172] (0xc0003c11e0) Data frame received for 5\nI0309 08:47:46.034059 50 log.go:172] (0xc0005ca640) (5) Data frame handling\nI0309 08:47:46.034075 50 log.go:172] (0xc0005ca640) (5) Data frame sent\n+ nslookup nodeport-service\nI0309 08:47:46.042241 50 log.go:172] (0xc0003c11e0) Data frame received for 3\nI0309 08:47:46.042263 50 log.go:172] (0xc00062dea0) (3) Data frame handling\nI0309 08:47:46.042278 50 log.go:172] (0xc00062dea0) (3) Data frame sent\nI0309 08:47:46.044635 50 log.go:172] (0xc0003c11e0) Data frame received for 3\nI0309 08:47:46.044654 50 log.go:172] (0xc00062dea0) (3) Data frame handling\nI0309 08:47:46.044676 50 log.go:172] (0xc00062dea0) (3) Data frame 
sent\nI0309 08:47:46.045001 50 log.go:172] (0xc0003c11e0) Data frame received for 3\nI0309 08:47:46.045024 50 log.go:172] (0xc00062dea0) (3) Data frame handling\nI0309 08:47:46.045050 50 log.go:172] (0xc0003c11e0) Data frame received for 5\nI0309 08:47:46.045070 50 log.go:172] (0xc0005ca640) (5) Data frame handling\nI0309 08:47:46.046941 50 log.go:172] (0xc0003c11e0) Data frame received for 1\nI0309 08:47:46.046979 50 log.go:172] (0xc00062de00) (1) Data frame handling\nI0309 08:47:46.047002 50 log.go:172] (0xc00062de00) (1) Data frame sent\nI0309 08:47:46.047024 50 log.go:172] (0xc0003c11e0) (0xc00062de00) Stream removed, broadcasting: 1\nI0309 08:47:46.047138 50 log.go:172] (0xc0003c11e0) Go away received\nI0309 08:47:46.047425 50 log.go:172] (0xc0003c11e0) (0xc00062de00) Stream removed, broadcasting: 1\nI0309 08:47:46.047458 50 log.go:172] (0xc0003c11e0) (0xc00062dea0) Stream removed, broadcasting: 3\nI0309 08:47:46.047470 50 log.go:172] (0xc0003c11e0) (0xc0005ca640) Stream removed, broadcasting: 5\n" Mar 9 08:47:46.051: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2167.svc.cluster.local\tcanonical name = externalsvc.services-2167.svc.cluster.local.\nName:\texternalsvc.services-2167.svc.cluster.local\nAddress: 10.101.46.3\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2167, will wait for the garbage collector to delete the pods Mar 9 08:47:46.120: INFO: Deleting ReplicationController externalsvc took: 5.584777ms Mar 9 08:47:46.420: INFO: Terminating ReplicationController externalsvc pods took: 300.237095ms Mar 9 08:47:50.655: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:47:50.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2167" for this suite. 
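The nslookup output above already shows the end state: the former NodePort service resolves as a CNAME to externalsvc's cluster DNS name. After the type change the service is roughly:

  apiVersion: v1
  kind: Service
  metadata:
    name: nodeport-service
    namespace: services-2167
  spec:
    type: ExternalName
    externalName: externalsvc.services-2167.svc.cluster.local   # taken from the nslookup answer above

The conversion is an update of spec.type plus spec.externalName (with the now-meaningless clusterIP and nodePort fields cleared); kube-proxy stops programming anything for the service and DNS alone does the redirection.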
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.632 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":18,"skipped":366,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:47:50.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 9 08:47:50.765: INFO: Waiting up to 5m0s for pod "downward-api-67c857f9-de5d-4a4f-9533-3237fdfb693f" in namespace "downward-api-94" to be "success or failure" Mar 9 08:47:50.772: INFO: Pod "downward-api-67c857f9-de5d-4a4f-9533-3237fdfb693f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.959747ms Mar 9 08:47:52.776: INFO: Pod "downward-api-67c857f9-de5d-4a4f-9533-3237fdfb693f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010881462s STEP: Saw pod success Mar 9 08:47:52.776: INFO: Pod "downward-api-67c857f9-de5d-4a4f-9533-3237fdfb693f" satisfied condition "success or failure" Mar 9 08:47:52.779: INFO: Trying to get logs from node jerma-worker pod downward-api-67c857f9-de5d-4a4f-9533-3237fdfb693f container dapi-container: STEP: delete the pod Mar 9 08:47:52.816: INFO: Waiting for pod downward-api-67c857f9-de5d-4a4f-9533-3237fdfb693f to disappear Mar 9 08:47:52.823: INFO: Pod downward-api-67c857f9-de5d-4a4f-9533-3237fdfb693f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:47:52.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-94" for this suite. 
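The environment variables come from fieldRef selectors on the pod's own metadata and status. A minimal equivalent (the container name dapi-container is from the log; the rest is illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo           # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP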
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":373,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:47:52.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-5ea9e0a8-f074-40a8-8c89-3696d3e2f6a7 STEP: Creating a pod to test consume configMaps Mar 9 08:47:52.905: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6807a814-4d35-46cd-a09d-bac5ecefc2b2" in namespace "projected-9747" to be "success or failure" Mar 9 08:47:52.908: INFO: Pod "pod-projected-configmaps-6807a814-4d35-46cd-a09d-bac5ecefc2b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084179ms Mar 9 08:47:54.912: INFO: Pod "pod-projected-configmaps-6807a814-4d35-46cd-a09d-bac5ecefc2b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006338001s STEP: Saw pod success Mar 9 08:47:54.912: INFO: Pod "pod-projected-configmaps-6807a814-4d35-46cd-a09d-bac5ecefc2b2" satisfied condition "success or failure" Mar 9 08:47:54.915: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6807a814-4d35-46cd-a09d-bac5ecefc2b2 container projected-configmap-volume-test: STEP: delete the pod Mar 9 08:47:54.939: INFO: Waiting for pod pod-projected-configmaps-6807a814-4d35-46cd-a09d-bac5ecefc2b2 to disappear Mar 9 08:47:54.943: INFO: Pod pod-projected-configmaps-6807a814-4d35-46cd-a09d-bac5ecefc2b2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:47:54.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9747" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":375,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:47:54.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:47:55.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7335" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":21,"skipped":383,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:47:55.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1045 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 9 08:47:55.147: INFO: Found 0 stateful pods, waiting for 3 Mar 9 08:48:05.152: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 9 08:48:05.152: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 9 08:48:05.152: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 9 08:48:05.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1045 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 08:48:05.417: INFO: stderr: "I0309 08:48:05.320189 72 log.go:172] (0xc00002a210) 
(0xc000a2c000) Create stream\nI0309 08:48:05.320276 72 log.go:172] (0xc00002a210) (0xc000a2c000) Stream added, broadcasting: 1\nI0309 08:48:05.322441 72 log.go:172] (0xc00002a210) Reply frame received for 1\nI0309 08:48:05.322476 72 log.go:172] (0xc00002a210) (0xc00076b540) Create stream\nI0309 08:48:05.322488 72 log.go:172] (0xc00002a210) (0xc00076b540) Stream added, broadcasting: 3\nI0309 08:48:05.323158 72 log.go:172] (0xc00002a210) Reply frame received for 3\nI0309 08:48:05.323188 72 log.go:172] (0xc00002a210) (0xc000679ae0) Create stream\nI0309 08:48:05.323198 72 log.go:172] (0xc00002a210) (0xc000679ae0) Stream added, broadcasting: 5\nI0309 08:48:05.324618 72 log.go:172] (0xc00002a210) Reply frame received for 5\nI0309 08:48:05.384835 72 log.go:172] (0xc00002a210) Data frame received for 5\nI0309 08:48:05.384856 72 log.go:172] (0xc000679ae0) (5) Data frame handling\nI0309 08:48:05.384863 72 log.go:172] (0xc000679ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 08:48:05.412271 72 log.go:172] (0xc00002a210) Data frame received for 3\nI0309 08:48:05.412310 72 log.go:172] (0xc00076b540) (3) Data frame handling\nI0309 08:48:05.412321 72 log.go:172] (0xc00076b540) (3) Data frame sent\nI0309 08:48:05.412339 72 log.go:172] (0xc00002a210) Data frame received for 5\nI0309 08:48:05.412346 72 log.go:172] (0xc000679ae0) (5) Data frame handling\nI0309 08:48:05.412378 72 log.go:172] (0xc00002a210) Data frame received for 3\nI0309 08:48:05.412404 72 log.go:172] (0xc00076b540) (3) Data frame handling\nI0309 08:48:05.414061 72 log.go:172] (0xc00002a210) Data frame received for 1\nI0309 08:48:05.414083 72 log.go:172] (0xc000a2c000) (1) Data frame handling\nI0309 08:48:05.414092 72 log.go:172] (0xc000a2c000) (1) Data frame sent\nI0309 08:48:05.414107 72 log.go:172] (0xc00002a210) (0xc000a2c000) Stream removed, broadcasting: 1\nI0309 08:48:05.414167 72 log.go:172] (0xc00002a210) Go away received\nI0309 08:48:05.414499 72 log.go:172] (0xc00002a210) (0xc000a2c000) Stream removed, broadcasting: 1\nI0309 08:48:05.414516 72 log.go:172] (0xc00002a210) (0xc00076b540) Stream removed, broadcasting: 3\nI0309 08:48:05.414526 72 log.go:172] (0xc00002a210) (0xc000679ae0) Stream removed, broadcasting: 5\n" Mar 9 08:48:05.417: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 08:48:05.417: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 9 08:48:15.447: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 9 08:48:25.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1045 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 08:48:25.715: INFO: stderr: "I0309 08:48:25.641996 90 log.go:172] (0xc0009a2000) (0xc000727900) Create stream\nI0309 08:48:25.642045 90 log.go:172] (0xc0009a2000) (0xc000727900) Stream added, broadcasting: 1\nI0309 08:48:25.643525 90 log.go:172] (0xc0009a2000) Reply frame received for 1\nI0309 08:48:25.643574 90 log.go:172] (0xc0009a2000) (0xc0007279a0) Create stream\nI0309 08:48:25.643592 90 log.go:172] (0xc0009a2000) (0xc0007279a0) Stream added, broadcasting: 3\nI0309 08:48:25.644684 90 log.go:172] (0xc0009a2000) Reply frame received for 3\nI0309 08:48:25.644713 
90 log.go:172] (0xc0009a2000) (0xc00041f360) Create stream\nI0309 08:48:25.644721 90 log.go:172] (0xc0009a2000) (0xc00041f360) Stream added, broadcasting: 5\nI0309 08:48:25.645575 90 log.go:172] (0xc0009a2000) Reply frame received for 5\nI0309 08:48:25.711335 90 log.go:172] (0xc0009a2000) Data frame received for 5\nI0309 08:48:25.711365 90 log.go:172] (0xc00041f360) (5) Data frame handling\nI0309 08:48:25.711374 90 log.go:172] (0xc00041f360) (5) Data frame sent\nI0309 08:48:25.711381 90 log.go:172] (0xc0009a2000) Data frame received for 5\nI0309 08:48:25.711385 90 log.go:172] (0xc00041f360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 08:48:25.711441 90 log.go:172] (0xc0009a2000) Data frame received for 3\nI0309 08:48:25.711471 90 log.go:172] (0xc0007279a0) (3) Data frame handling\nI0309 08:48:25.711484 90 log.go:172] (0xc0007279a0) (3) Data frame sent\nI0309 08:48:25.711493 90 log.go:172] (0xc0009a2000) Data frame received for 3\nI0309 08:48:25.711502 90 log.go:172] (0xc0007279a0) (3) Data frame handling\nI0309 08:48:25.712219 90 log.go:172] (0xc0009a2000) Data frame received for 1\nI0309 08:48:25.712240 90 log.go:172] (0xc000727900) (1) Data frame handling\nI0309 08:48:25.712254 90 log.go:172] (0xc000727900) (1) Data frame sent\nI0309 08:48:25.712296 90 log.go:172] (0xc0009a2000) (0xc000727900) Stream removed, broadcasting: 1\nI0309 08:48:25.712349 90 log.go:172] (0xc0009a2000) Go away received\nI0309 08:48:25.712663 90 log.go:172] (0xc0009a2000) (0xc000727900) Stream removed, broadcasting: 1\nI0309 08:48:25.712680 90 log.go:172] (0xc0009a2000) (0xc0007279a0) Stream removed, broadcasting: 3\nI0309 08:48:25.712687 90 log.go:172] (0xc0009a2000) (0xc00041f360) Stream removed, broadcasting: 5\n" Mar 9 08:48:25.716: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 08:48:25.716: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 08:48:35.780: INFO: Waiting for StatefulSet statefulset-1045/ss2 to complete update Mar 9 08:48:35.780: INFO: Waiting for Pod statefulset-1045/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 9 08:48:35.780: INFO: Waiting for Pod statefulset-1045/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 9 08:48:35.780: INFO: Waiting for Pod statefulset-1045/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 9 08:48:45.789: INFO: Waiting for StatefulSet statefulset-1045/ss2 to complete update Mar 9 08:48:45.789: INFO: Waiting for Pod statefulset-1045/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 9 08:48:45.789: INFO: Waiting for Pod statefulset-1045/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 9 08:48:55.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1045 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 08:48:56.052: INFO: stderr: "I0309 08:48:55.938704 110 log.go:172] (0xc00091fa20) (0xc000978960) Create stream\nI0309 08:48:55.938744 110 log.go:172] (0xc00091fa20) (0xc000978960) Stream added, broadcasting: 1\nI0309 08:48:55.940568 110 log.go:172] (0xc00091fa20) Reply frame received for 1\nI0309 08:48:55.940595 110 log.go:172] (0xc00091fa20) (0xc000c1a140) Create stream\nI0309 08:48:55.940603 110 log.go:172] (0xc00091fa20) (0xc000c1a140) Stream added, 
broadcasting: 3\nI0309 08:48:55.941301 110 log.go:172] (0xc00091fa20) Reply frame received for 3\nI0309 08:48:55.941327 110 log.go:172] (0xc00091fa20) (0xc0003114a0) Create stream\nI0309 08:48:55.941338 110 log.go:172] (0xc00091fa20) (0xc0003114a0) Stream added, broadcasting: 5\nI0309 08:48:55.942072 110 log.go:172] (0xc00091fa20) Reply frame received for 5\nI0309 08:48:56.004866 110 log.go:172] (0xc00091fa20) Data frame received for 5\nI0309 08:48:56.004887 110 log.go:172] (0xc0003114a0) (5) Data frame handling\nI0309 08:48:56.004900 110 log.go:172] (0xc0003114a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 08:48:56.047602 110 log.go:172] (0xc00091fa20) Data frame received for 3\nI0309 08:48:56.047635 110 log.go:172] (0xc000c1a140) (3) Data frame handling\nI0309 08:48:56.047656 110 log.go:172] (0xc000c1a140) (3) Data frame sent\nI0309 08:48:56.047719 110 log.go:172] (0xc00091fa20) Data frame received for 3\nI0309 08:48:56.047734 110 log.go:172] (0xc000c1a140) (3) Data frame handling\nI0309 08:48:56.048191 110 log.go:172] (0xc00091fa20) Data frame received for 5\nI0309 08:48:56.048209 110 log.go:172] (0xc0003114a0) (5) Data frame handling\nI0309 08:48:56.049663 110 log.go:172] (0xc00091fa20) Data frame received for 1\nI0309 08:48:56.049694 110 log.go:172] (0xc000978960) (1) Data frame handling\nI0309 08:48:56.049715 110 log.go:172] (0xc000978960) (1) Data frame sent\nI0309 08:48:56.049736 110 log.go:172] (0xc00091fa20) (0xc000978960) Stream removed, broadcasting: 1\nI0309 08:48:56.049763 110 log.go:172] (0xc00091fa20) Go away received\nI0309 08:48:56.050107 110 log.go:172] (0xc00091fa20) (0xc000978960) Stream removed, broadcasting: 1\nI0309 08:48:56.050161 110 log.go:172] (0xc00091fa20) (0xc000c1a140) Stream removed, broadcasting: 3\nI0309 08:48:56.050171 110 log.go:172] (0xc00091fa20) (0xc0003114a0) Stream removed, broadcasting: 5\n" Mar 9 08:48:56.053: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 08:48:56.053: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 08:49:06.082: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 9 08:49:16.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1045 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 08:49:16.307: INFO: stderr: "I0309 08:49:16.222170 130 log.go:172] (0xc000a3ec60) (0xc0009ae1e0) Create stream\nI0309 08:49:16.222207 130 log.go:172] (0xc000a3ec60) (0xc0009ae1e0) Stream added, broadcasting: 1\nI0309 08:49:16.223884 130 log.go:172] (0xc000a3ec60) Reply frame received for 1\nI0309 08:49:16.223907 130 log.go:172] (0xc000a3ec60) (0xc0008f6000) Create stream\nI0309 08:49:16.223914 130 log.go:172] (0xc000a3ec60) (0xc0008f6000) Stream added, broadcasting: 3\nI0309 08:49:16.224586 130 log.go:172] (0xc000a3ec60) Reply frame received for 3\nI0309 08:49:16.224607 130 log.go:172] (0xc000a3ec60) (0xc000ab8320) Create stream\nI0309 08:49:16.224617 130 log.go:172] (0xc000a3ec60) (0xc000ab8320) Stream added, broadcasting: 5\nI0309 08:49:16.225349 130 log.go:172] (0xc000a3ec60) Reply frame received for 5\nI0309 08:49:16.303156 130 log.go:172] (0xc000a3ec60) Data frame received for 5\nI0309 08:49:16.303326 130 log.go:172] (0xc000a3ec60) Data frame received for 3\nI0309 08:49:16.303354 130 log.go:172] (0xc0008f6000) (3) Data frame handling\nI0309 08:49:16.303388 
130 log.go:172] (0xc0008f6000) (3) Data frame sent\nI0309 08:49:16.303400 130 log.go:172] (0xc000a3ec60) Data frame received for 3\nI0309 08:49:16.303408 130 log.go:172] (0xc0008f6000) (3) Data frame handling\nI0309 08:49:16.303457 130 log.go:172] (0xc000ab8320) (5) Data frame handling\nI0309 08:49:16.303480 130 log.go:172] (0xc000ab8320) (5) Data frame sent\nI0309 08:49:16.303495 130 log.go:172] (0xc000a3ec60) Data frame received for 5\nI0309 08:49:16.303507 130 log.go:172] (0xc000ab8320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 08:49:16.305139 130 log.go:172] (0xc000a3ec60) Data frame received for 1\nI0309 08:49:16.305148 130 log.go:172] (0xc0009ae1e0) (1) Data frame handling\nI0309 08:49:16.305154 130 log.go:172] (0xc0009ae1e0) (1) Data frame sent\nI0309 08:49:16.305161 130 log.go:172] (0xc000a3ec60) (0xc0009ae1e0) Stream removed, broadcasting: 1\nI0309 08:49:16.305333 130 log.go:172] (0xc000a3ec60) (0xc0009ae1e0) Stream removed, broadcasting: 1\nI0309 08:49:16.305341 130 log.go:172] (0xc000a3ec60) (0xc0008f6000) Stream removed, broadcasting: 3\nI0309 08:49:16.305392 130 log.go:172] (0xc000a3ec60) Go away received\nI0309 08:49:16.305447 130 log.go:172] (0xc000a3ec60) (0xc000ab8320) Stream removed, broadcasting: 5\n" Mar 9 08:49:16.307: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 08:49:16.307: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 08:49:26.323: INFO: Waiting for StatefulSet statefulset-1045/ss2 to complete update Mar 9 08:49:26.323: INFO: Waiting for Pod statefulset-1045/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 9 08:49:36.328: INFO: Deleting all statefulset in ns statefulset-1045 Mar 9 08:49:36.330: INFO: Scaling statefulset ss2 to 0 Mar 9 08:50:06.346: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 08:50:06.348: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:50:06.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1045" for this suite. 
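The whole sequence hangs off one field: the suite edits the pod template's image from httpd:2.4.38-alpine to 2.4.39-alpine and back, and the controller replaces pods in reverse ordinal order, recording each template as a ControllerRevision (ss2-65c7964b94 and ss2-84f9d6bf57 above). The object under test looks roughly like this; the selector label and container name are illustrative:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss2
    namespace: statefulset-1045
  spec:
    serviceName: test                 # the headless service created in the BeforeEach above
    replicas: 3
    selector:
      matchLabels:
        app: ss2-demo                 # hypothetical label
    updateStrategy:
      type: RollingUpdate             # the default; the highest ordinal is replaced first
    template:
      metadata:
        labels:
          app: ss2-demo
      spec:
        containers:
        - name: webserver             # hypothetical container name
          image: docker.io/library/httpd:2.4.38-alpine   # flipped to 2.4.39-alpine, then back

Rolling back is not a distinct verb here: re-applying the previous template makes the old revision current again, and the controller walks the ordinals once more, which is the "Rolling back update in reverse ordinal order" STEP.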
• [SLOW TEST:131.293 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":22,"skipped":394,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:50:06.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 9 08:50:06.956: INFO: created pod pod-service-account-defaultsa Mar 9 08:50:06.956: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 9 08:50:06.965: INFO: created pod pod-service-account-mountsa Mar 9 08:50:06.965: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 9 08:50:06.985: INFO: created pod pod-service-account-nomountsa Mar 9 08:50:06.986: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 9 08:50:06.996: INFO: created pod pod-service-account-defaultsa-mountspec Mar 9 08:50:06.996: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 9 08:50:07.018: INFO: created pod pod-service-account-mountsa-mountspec Mar 9 08:50:07.018: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 9 08:50:07.089: INFO: created pod pod-service-account-nomountsa-mountspec Mar 9 08:50:07.089: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 9 08:50:07.094: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 9 08:50:07.094: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 9 08:50:07.104: INFO: created pod pod-service-account-mountsa-nomountspec Mar 9 08:50:07.104: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 9 08:50:07.163: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 9 08:50:07.163: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:50:07.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3049" for this suite. 
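The pod matrix above is checking precedence: automountServiceAccountToken exists on both the ServiceAccount and the pod spec, and the pod-level field wins, which is why pod-service-account-nomountsa-mountspec still reports mount: true while pod-service-account-mountsa-nomountspec reports false. A sketch with hypothetical names:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nomount-sa                     # hypothetical
  automountServiceAccountToken: false    # opts out for every pod using this SA...
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: nomountsa-mountspec-demo       # hypothetical
  spec:
    serviceAccountName: nomount-sa
    automountServiceAccountToken: true   # ...unless the pod spec overrides it, as here
    containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]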
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":23,"skipped":407,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:50:07.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 08:50:07.489: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 9 08:50:12.492: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 9 08:50:12.493: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 9 08:50:12.541: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6062 /apis/apps/v1/namespaces/deployment-6062/deployments/test-cleanup-deployment cec34b75-d738-4b7c-a445-6e4e5219c4e1 257904 1 2020-03-09 08:50:12 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000ef7ff8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 9 08:50:12.550: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-6062 /apis/apps/v1/namespaces/deployment-6062/replicasets/test-cleanup-deployment-55ffc6b7b6 9377e315-9f86-4cb0-95cf-ed8f70aa76cc 257907 1 2020-03-09 08:50:12 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment cec34b75-d738-4b7c-a445-6e4e5219c4e1 0xc000a178f7 0xc000a178f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a17a48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 9 08:50:12.550: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 9 08:50:12.550: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6062 /apis/apps/v1/namespaces/deployment-6062/replicasets/test-cleanup-controller 52e346f2-7b33-497f-86cd-2a9f0b4a7aa7 257906 1 2020-03-09 08:50:07 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment cec34b75-d738-4b7c-a445-6e4e5219c4e1 0xc000a176b7 0xc000a176b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000a17748 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 9 08:50:12.574: INFO: Pod "test-cleanup-controller-4bbzs" is available: &Pod{ObjectMeta:{test-cleanup-controller-4bbzs test-cleanup-controller- deployment-6062 /api/v1/namespaces/deployment-6062/pods/test-cleanup-controller-4bbzs 0a1e735e-d90f-4a28-b6b8-ccd86ce703be 257820 0 2020-03-09 08:50:07 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 52e346f2-7b33-497f-86cd-2a9f0b4a7aa7 0xc001f0c637 0xc001f0c638}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plfww,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plfww,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plfww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:50:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:50:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:50:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:50:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.218,StartTime:2020-03-09 08:50:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 08:50:09 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://57d425852ea4cc8f4af699af2fa2c7e29bd2d9b52c3ff3512e8a4cb2cf07e307,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 08:50:12.574: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-6qtkv" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-6qtkv test-cleanup-deployment-55ffc6b7b6- deployment-6062 /api/v1/namespaces/deployment-6062/pods/test-cleanup-deployment-55ffc6b7b6-6qtkv 35ab0099-3da1-4fae-b159-7d9ce58fa2ae 257913 0 2020-03-09 08:50:12 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 9377e315-9f86-4cb0-95cf-ed8f70aa76cc 0xc001f0c9a7 0xc001f0c9a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plfww,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plfww,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plfww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:50:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:50:12.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6062" for this suite. • [SLOW TEST:5.377 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":24,"skipped":428,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:50:12.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 08:50:12.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6df645a2-b445-4bef-9d44-22aee4a8b548" in namespace "downward-api-3287" to be "success or failure" Mar 9 08:50:12.761: INFO: Pod "downwardapi-volume-6df645a2-b445-4bef-9d44-22aee4a8b548": Phase="Pending", Reason="", readiness=false. Elapsed: 16.095236ms Mar 9 08:50:14.765: INFO: Pod "downwardapi-volume-6df645a2-b445-4bef-9d44-22aee4a8b548": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020037412s Mar 9 08:50:16.769: INFO: Pod "downwardapi-volume-6df645a2-b445-4bef-9d44-22aee4a8b548": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.023876701s STEP: Saw pod success Mar 9 08:50:16.769: INFO: Pod "downwardapi-volume-6df645a2-b445-4bef-9d44-22aee4a8b548" satisfied condition "success or failure" Mar 9 08:50:16.772: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6df645a2-b445-4bef-9d44-22aee4a8b548 container client-container: STEP: delete the pod Mar 9 08:50:16.813: INFO: Waiting for pod downwardapi-volume-6df645a2-b445-4bef-9d44-22aee4a8b548 to disappear Mar 9 08:50:16.818: INFO: Pod downwardapi-volume-6df645a2-b445-4bef-9d44-22aee4a8b548 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:50:16.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3287" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":442,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:50:16.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 9 08:50:16.903: INFO: Waiting up to 5m0s for pod "pod-a2d8ecbe-fdcb-4332-a770-34a782db8b41" in namespace "emptydir-3389" to be "success or failure" Mar 9 08:50:16.907: INFO: Pod "pod-a2d8ecbe-fdcb-4332-a770-34a782db8b41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118593ms Mar 9 08:50:18.912: INFO: Pod "pod-a2d8ecbe-fdcb-4332-a770-34a782db8b41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008345129s STEP: Saw pod success Mar 9 08:50:18.912: INFO: Pod "pod-a2d8ecbe-fdcb-4332-a770-34a782db8b41" satisfied condition "success or failure" Mar 9 08:50:18.915: INFO: Trying to get logs from node jerma-worker2 pod pod-a2d8ecbe-fdcb-4332-a770-34a782db8b41 container test-container: STEP: delete the pod Mar 9 08:50:18.969: INFO: Waiting for pod pod-a2d8ecbe-fdcb-4332-a770-34a782db8b41 to disappear Mar 9 08:50:18.973: INFO: Pod pod-a2d8ecbe-fdcb-4332-a770-34a782db8b41 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:50:18.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3389" for this suite. 
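[Editor's note] The EmptyDir case that just finished creates a pod mounting an emptyDir volume with no medium set and verifies the mount point's mode. A minimal sketch of that shape; the pod name, image, and probe command below are invented stand-ins for the suite's mounttest helper, not its actual fixture:

```yaml
# Illustrative sketch only, not the e2e suite's real pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31
    # Print the mode of the mount point; with no medium set the kubelet
    # backs the volume with node-local disk rather than tmpfs.
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # medium omitted => "default"
```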
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":447,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:50:18.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 9 08:50:23.142: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 9 08:50:23.151: INFO: Pod pod-with-prestop-exec-hook still exists Mar 9 08:50:25.152: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 9 08:50:25.156: INFO: Pod pod-with-prestop-exec-hook still exists Mar 9 08:50:27.152: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 9 08:50:27.156: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:50:27.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4901" for this suite. 
• [SLOW TEST:8.189 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":454,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:50:27.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-8jxh STEP: Creating a pod to test atomic-volume-subpath Mar 9 08:50:27.339: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8jxh" in namespace "subpath-7011" to be "success or failure" Mar 9 08:50:27.343: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163052ms Mar 9 08:50:29.346: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Running", Reason="", readiness=true. Elapsed: 2.007134423s Mar 9 08:50:31.350: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Running", Reason="", readiness=true. Elapsed: 4.010911261s Mar 9 08:50:33.353: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Running", Reason="", readiness=true. Elapsed: 6.014456889s Mar 9 08:50:35.356: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Running", Reason="", readiness=true. Elapsed: 8.01773342s Mar 9 08:50:37.372: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Running", Reason="", readiness=true. Elapsed: 10.033091116s Mar 9 08:50:39.382: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Running", Reason="", readiness=true. Elapsed: 12.04307133s Mar 9 08:50:41.385: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Running", Reason="", readiness=true. Elapsed: 14.04679371s Mar 9 08:50:43.389: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Running", Reason="", readiness=true. Elapsed: 16.05009132s Mar 9 08:50:45.406: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Running", Reason="", readiness=true. Elapsed: 18.067283795s Mar 9 08:50:47.430: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Running", Reason="", readiness=true. Elapsed: 20.091392142s Mar 9 08:50:49.460: INFO: Pod "pod-subpath-test-projected-8jxh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.121181831s STEP: Saw pod success Mar 9 08:50:49.460: INFO: Pod "pod-subpath-test-projected-8jxh" satisfied condition "success or failure" Mar 9 08:50:49.462: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-8jxh container test-container-subpath-projected-8jxh: STEP: delete the pod Mar 9 08:50:49.500: INFO: Waiting for pod pod-subpath-test-projected-8jxh to disappear Mar 9 08:50:49.507: INFO: Pod pod-subpath-test-projected-8jxh no longer exists STEP: Deleting pod pod-subpath-test-projected-8jxh Mar 9 08:50:49.507: INFO: Deleting pod "pod-subpath-test-projected-8jxh" in namespace "subpath-7011" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:50:49.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7011" for this suite. • [SLOW TEST:22.349 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":28,"skipped":455,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:50:49.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 08:50:49.600: INFO: Waiting up to 5m0s for pod "downwardapi-volume-beebb008-0adc-43fc-95c4-c44c1961dcf9" in namespace "projected-8365" to be "success or failure" Mar 9 08:50:49.609: INFO: Pod "downwardapi-volume-beebb008-0adc-43fc-95c4-c44c1961dcf9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.30125ms Mar 9 08:50:51.613: INFO: Pod "downwardapi-volume-beebb008-0adc-43fc-95c4-c44c1961dcf9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01331671s STEP: Saw pod success Mar 9 08:50:51.613: INFO: Pod "downwardapi-volume-beebb008-0adc-43fc-95c4-c44c1961dcf9" satisfied condition "success or failure" Mar 9 08:50:51.616: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-beebb008-0adc-43fc-95c4-c44c1961dcf9 container client-container: STEP: delete the pod Mar 9 08:50:51.677: INFO: Waiting for pod downwardapi-volume-beebb008-0adc-43fc-95c4-c44c1961dcf9 to disappear Mar 9 08:50:51.687: INFO: Pod downwardapi-volume-beebb008-0adc-43fc-95c4-c44c1961dcf9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:50:51.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8365" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":492,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:50:51.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 08:50:52.327: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 9 08:50:54.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719340652, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719340652, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719340652, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719340652, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 08:50:57.380: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the 
AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:51:09.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5519" for this suite. STEP: Destroying namespace "webhook-5519-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.934 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":30,"skipped":513,"failed":0} S ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:51:09.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 08:51:09.671: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:51:13.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9143" for this suite. 
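[Editor's note] The Pods case above starts a pod and reads its logs back over a websocket rather than a plain HTTP GET. A sketch of a pod whose output could be streamed that way; the pod below is invented, while the endpoint noted in the comment is the standard pod log subresource:

```yaml
# Illustrative sketch only; names and command are invented.
apiVersion: v1
kind: Pod
metadata:
  name: ws-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.31
    command: ["sh", "-c", "echo hello from the pod; sleep 30"]
# The log subresource lives at
#   GET /api/v1/namespaces/<ns>/pods/ws-logs-demo/log?follow=true
# and the test opens it with a websocket Upgrade handshake instead of
# a plain read.
```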
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":514,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:51:13.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-7eda7e09-c09f-49ff-8abc-8575796f5ca5 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:51:13.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3435" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":32,"skipped":517,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:51:13.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 9 08:51:13.882: INFO: Waiting up to 5m0s for pod "downward-api-c5d0bcdc-c031-4b13-9f67-5f131490a816" in namespace "downward-api-6776" to be "success or failure" Mar 9 08:51:13.885: INFO: Pod "downward-api-c5d0bcdc-c031-4b13-9f67-5f131490a816": Phase="Pending", Reason="", readiness=false. Elapsed: 2.967423ms Mar 9 08:51:15.889: INFO: Pod "downward-api-c5d0bcdc-c031-4b13-9f67-5f131490a816": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007042049s Mar 9 08:51:17.893: INFO: Pod "downward-api-c5d0bcdc-c031-4b13-9f67-5f131490a816": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010759213s STEP: Saw pod success Mar 9 08:51:17.893: INFO: Pod "downward-api-c5d0bcdc-c031-4b13-9f67-5f131490a816" satisfied condition "success or failure" Mar 9 08:51:17.896: INFO: Trying to get logs from node jerma-worker2 pod downward-api-c5d0bcdc-c031-4b13-9f67-5f131490a816 container dapi-container: STEP: delete the pod Mar 9 08:51:17.953: INFO: Waiting for pod downward-api-c5d0bcdc-c031-4b13-9f67-5f131490a816 to disappear Mar 9 08:51:17.964: INFO: Pod downward-api-c5d0bcdc-c031-4b13-9f67-5f131490a816 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:51:17.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6776" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":528,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:51:17.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 08:51:18.078: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 9 08:51:18.085: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:18.106: INFO: Number of nodes with available pods: 0 Mar 9 08:51:18.106: INFO: Node jerma-worker is running more than one daemon pod Mar 9 08:51:19.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:19.160: INFO: Number of nodes with available pods: 0 Mar 9 08:51:19.160: INFO: Node jerma-worker is running more than one daemon pod Mar 9 08:51:20.110: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:20.113: INFO: Number of nodes with available pods: 2 Mar 9 08:51:20.113: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 9 08:51:20.176: INFO: Wrong image for pod: daemon-set-5l8tq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:20.176: INFO: Wrong image for pod: daemon-set-rddzk. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:20.197: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:21.201: INFO: Wrong image for pod: daemon-set-5l8tq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:21.201: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:21.205: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:22.201: INFO: Wrong image for pod: daemon-set-5l8tq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:22.201: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:22.205: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:24.461: INFO: Wrong image for pod: daemon-set-5l8tq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:24.461: INFO: Pod daemon-set-5l8tq is not available Mar 9 08:51:24.461: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:24.466: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:25.201: INFO: Pod daemon-set-hpxnc is not available Mar 9 08:51:25.201: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:25.205: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:26.202: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:26.205: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:27.201: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:27.201: INFO: Pod daemon-set-rddzk is not available Mar 9 08:51:27.205: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:28.202: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 9 08:51:28.202: INFO: Pod daemon-set-rddzk is not available Mar 9 08:51:28.207: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:29.202: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:29.202: INFO: Pod daemon-set-rddzk is not available Mar 9 08:51:29.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:30.202: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:30.202: INFO: Pod daemon-set-rddzk is not available Mar 9 08:51:30.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:31.202: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:31.202: INFO: Pod daemon-set-rddzk is not available Mar 9 08:51:31.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:32.201: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:32.201: INFO: Pod daemon-set-rddzk is not available Mar 9 08:51:32.205: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:33.202: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:33.202: INFO: Pod daemon-set-rddzk is not available Mar 9 08:51:33.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:34.202: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:34.202: INFO: Pod daemon-set-rddzk is not available Mar 9 08:51:34.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:35.202: INFO: Wrong image for pod: daemon-set-rddzk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 9 08:51:35.202: INFO: Pod daemon-set-rddzk is not available Mar 9 08:51:35.205: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:36.201: INFO: Pod daemon-set-qklgm is not available Mar 9 08:51:36.204: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 9 08:51:36.208: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:36.211: INFO: Number of nodes with available pods: 1 Mar 9 08:51:36.211: INFO: Node jerma-worker is running more than one daemon pod Mar 9 08:51:37.215: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:37.218: INFO: Number of nodes with available pods: 1 Mar 9 08:51:37.218: INFO: Node jerma-worker is running more than one daemon pod Mar 9 08:51:38.216: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 08:51:38.219: INFO: Number of nodes with available pods: 2 Mar 9 08:51:38.219: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1375, will wait for the garbage collector to delete the pods Mar 9 08:51:38.291: INFO: Deleting DaemonSet.extensions daemon-set took: 5.124001ms Mar 9 08:51:38.591: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.219109ms Mar 9 08:51:46.113: INFO: Number of nodes with available pods: 0 Mar 9 08:51:46.113: INFO: Number of running nodes: 0, number of available pods: 0 Mar 9 08:51:46.115: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1375/daemonsets","resourceVersion":"258584"},"items":null} Mar 9 08:51:46.117: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1375/pods","resourceVersion":"258584"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:51:46.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1375" for this suite. 
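[Editor's note] The DaemonSet case above creates a daemon set, patches its pod template image (httpd:2.4.38-alpine to agnhost:2.8 in the log), and waits for the RollingUpdate strategy to replace every pod node by node. A minimal sketch of such a DaemonSet, with invented name and labels:

```yaml
# Illustrative sketch only; metadata is invented.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rolling-demo
spec:
  selector:
    matchLabels:
      app: rolling-demo
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # take down at most one node's pod at a time
  template:
    metadata:
      labels:
        app: rolling-demo
    spec:
      containers:
      - name: app
        # Updating this field is what triggers the rollout observed above.
        image: docker.io/library/httpd:2.4.38-alpine
```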
• [SLOW TEST:28.158 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":34,"skipped":549,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:51:46.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 08:51:46.194: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-5bda5fa0-c530-4347-af09-b642b2354a32" in namespace "security-context-test-8772" to be "success or failure" Mar 9 08:51:46.203: INFO: Pod "busybox-readonly-false-5bda5fa0-c530-4347-af09-b642b2354a32": Phase="Pending", Reason="", readiness=false. Elapsed: 8.520762ms Mar 9 08:51:48.206: INFO: Pod "busybox-readonly-false-5bda5fa0-c530-4347-af09-b642b2354a32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012048807s Mar 9 08:51:48.207: INFO: Pod "busybox-readonly-false-5bda5fa0-c530-4347-af09-b642b2354a32" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:51:48.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8772" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:51:48.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 9 08:51:48.270: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 9 08:51:55.317: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:51:55.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7077" for this suite. • [SLOW TEST:7.114 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":634,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:51:55.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 08:51:55.396: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.933053ms) Mar 9 08:51:55.419: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 23.10874ms) Mar 9 08:51:55.423: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.197962ms) Mar 9 08:51:55.426: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.985831ms) Mar 9 08:51:55.429: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.986652ms) Mar 9 08:51:55.432: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.173617ms) Mar 9 08:51:55.435: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.937398ms) Mar 9 08:51:55.438: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.7276ms) Mar 9 08:51:55.440: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.326021ms) Mar 9 08:51:55.443: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.687273ms) Mar 9 08:51:55.445: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.471436ms) Mar 9 08:51:55.448: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.578909ms) Mar 9 08:51:55.451: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.581806ms) Mar 9 08:51:55.453: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.383006ms) Mar 9 08:51:55.456: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.483867ms) Mar 9 08:51:55.458: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.406634ms) Mar 9 08:51:55.460: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.240416ms) Mar 9 08:51:55.463: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.515918ms) Mar 9 08:51:55.465: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.250985ms) Mar 9 08:51:55.468: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.650048ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:51:55.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5540" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":37,"skipped":636,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:51:55.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 08:51:55.588: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 5.465337ms) Mar 9 08:51:55.591: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.358313ms) Mar 9 08:51:55.594: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.146413ms) Mar 9 08:51:55.598: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.151415ms) Mar 9 08:51:55.601: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.062019ms) Mar 9 08:51:55.604: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.38383ms) Mar 9 08:51:55.607: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.160121ms) Mar 9 08:51:55.610: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.731605ms) Mar 9 08:51:55.613: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.39905ms) Mar 9 08:51:55.615: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.51893ms) Mar 9 08:51:55.618: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.725136ms) Mar 9 08:51:55.620: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.444529ms) Mar 9 08:51:55.623: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.424286ms) Mar 9 08:51:55.625: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.425832ms) Mar 9 08:51:55.628: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.736812ms) Mar 9 08:51:55.631: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.746303ms) Mar 9 08:51:55.633: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.34307ms) Mar 9 08:51:55.635: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.235898ms) Mar 9 08:51:55.638: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.830451ms) Mar 9 08:51:55.658: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 19.698646ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:51:55.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-136" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":38,"skipped":643,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:51:55.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 9 08:51:55.721: INFO: Waiting up to 5m0s for pod "client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c" in namespace "containers-7431" to be "success or failure" Mar 9 08:51:55.737: INFO: Pod "client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.162247ms Mar 9 08:51:57.741: INFO: Pod "client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019867769s STEP: Saw pod success Mar 9 08:51:57.741: INFO: Pod "client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c" satisfied condition "success or failure" Mar 9 08:51:57.744: INFO: Trying to get logs from node jerma-worker2 pod client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c container test-container: STEP: delete the pod Mar 9 08:51:57.763: INFO: Waiting for pod client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c to disappear Mar 9 08:51:57.767: INFO: Pod client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:51:57.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7431" for this suite. 
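[Editor's note] The Docker Containers case above verifies that command and args in a container spec override the image's ENTRYPOINT and CMD respectively. A sketch, with invented names:

```yaml
# Illustrative sketch only; metadata is invented.
apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.31
    command: ["echo"]                              # replaces the image ENTRYPOINT
    args: ["command", "and", "args", "overridden"] # replaces the image CMD
```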
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:51:57.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 9 08:51:57.847: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 9 08:51:57.859: INFO: Waiting for terminating namespaces to be deleted... Mar 9 08:51:57.864: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 9 08:51:57.870: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 08:51:57.870: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 08:51:57.870: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 08:51:57.870: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 08:51:57.871: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 9 08:51:57.875: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 08:51:57.875: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 08:51:57.875: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 08:51:57.875: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-328891b8-5a44-47e5-bc35-e0acd108361b 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-328891b8-5a44-47e5-bc35-e0acd108361b off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-328891b8-5a44-47e5-bc35-e0acd108361b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:57:04.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5463" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:306.281 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":40,"skipped":688,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:57:04.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0309 08:57:10.169869 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 9 08:57:10.169: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:57:10.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7956" for this suite. 
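[Editor's note] The garbage-collector case above deletes a replication controller with deleteOptions that keep the owner object around until its pods are gone (foreground cascading deletion). A sketch of such an RC, with invented names; the kubectl flag spelling in the comment varies by client version, while the underlying API field is DeleteOptions.propagationPolicy: Foreground:

```yaml
# Illustrative sketch only; metadata is invented.
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
# Foreground deletion keeps gc-demo-rc visible (with a deletionTimestamp)
# until the GC has removed both pods, e.g.:
#   kubectl delete rc gc-demo-rc --cascade=foreground
```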
• [SLOW TEST:6.109 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":41,"skipped":704,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:57:10.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 9 08:57:10.237: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:57:14.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1120" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":42,"skipped":804,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:57:14.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:57:25.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5142" for this suite. • [SLOW TEST:11.203 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":43,"skipped":807,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:57:25.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 9 08:57:25.847: INFO: Waiting up to 5m0s for pod "pod-2796e969-f82a-44e3-9fcc-c9078a53f94a" in namespace "emptydir-4410" to be "success or failure" Mar 9 08:57:25.857: INFO: Pod "pod-2796e969-f82a-44e3-9fcc-c9078a53f94a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.866294ms Mar 9 08:57:27.861: INFO: Pod "pod-2796e969-f82a-44e3-9fcc-c9078a53f94a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013892168s STEP: Saw pod success Mar 9 08:57:27.861: INFO: Pod "pod-2796e969-f82a-44e3-9fcc-c9078a53f94a" satisfied condition "success or failure" Mar 9 08:57:27.864: INFO: Trying to get logs from node jerma-worker pod pod-2796e969-f82a-44e3-9fcc-c9078a53f94a container test-container: STEP: delete the pod Mar 9 08:57:27.901: INFO: Waiting for pod pod-2796e969-f82a-44e3-9fcc-c9078a53f94a to disappear Mar 9 08:57:27.919: INFO: Pod pod-2796e969-f82a-44e3-9fcc-c9078a53f94a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:57:27.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4410" for this suite. 
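[Editor's note — emptyDir modes] "(root,0666,default)" encodes the test matrix: write as root, file mode 0666, and medium "default" (node-disk backed) rather than "Memory" (tmpfs). A sketch that reproduces the check with plain busybox instead of the e2e test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /ed/f && chmod 0666 /ed/f && stat -c '%a' /ed/f"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}                # medium omitted = default (disk-backed)
EOF

The pod should reach Succeeded and its log should read 666, mirroring the "success or failure" condition the spec polls for above.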
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":824,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:57:27.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 9 08:57:27.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5432' Mar 9 08:57:28.120: INFO: stderr: "" Mar 9 08:57:28.120: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 9 08:57:33.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5432 -o json' Mar 9 08:57:33.274: INFO: stderr: "" Mar 9 08:57:33.274: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-09T08:57:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5432\",\n \"resourceVersion\": \"260050\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5432/pods/e2e-test-httpd-pod\",\n \"uid\": \"ab2acd42-2f61-47ca-93d7-adbcef10ada1\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-zxzl7\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-zxzl7\",\n \"secret\": {\n \"defaultMode\": 
420,\n \"secretName\": \"default-token-zxzl7\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-09T08:57:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-09T08:57:29Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-09T08:57:29Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-09T08:57:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b3ffdeb040777790e1b446594be301ff6aec38f73e4c734f26de68041b0e1297\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-09T08:57:29Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.227\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.227\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-09T08:57:28Z\"\n }\n}\n" STEP: replace the image in the pod Mar 9 08:57:33.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5432' Mar 9 08:57:33.557: INFO: stderr: "" Mar 9 08:57:33.557: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902 Mar 9 08:57:33.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5432' Mar 9 08:57:46.091: INFO: stderr: "" Mar 9 08:57:46.091: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:57:46.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5432" for this suite. 
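[Editor's note — kubectl replace] The flow above is: fetch the live pod as JSON, rewrite spec.containers[0].image, and feed the whole object (resourceVersion included) back through kubectl replace — container image is one of the few pod-spec fields that may be mutated in place. A sketch using a sed-based swap, which is purely illustrative; the suite edits the object programmatically:

kubectl get pod e2e-test-httpd-pod -n kubectl-5432 -o json \
  | sed 's|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -n kubectl-5432 -f -
kubectl get pod e2e-test-httpd-pod -n kubectl-5432 \
  -o jsonpath='{.spec.containers[0].image}'    # docker.io/library/busybox:1.29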
• [SLOW TEST:18.175 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":45,"skipped":831,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:57:46.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 9 08:57:48.199: INFO: &Pod{ObjectMeta:{send-events-e8f04675-612d-4be3-949c-4eb4c0330c85 events-9454 /api/v1/namespaces/events-9454/pods/send-events-e8f04675-612d-4be3-949c-4eb4c0330c85 b6d4b647-e1d1-474d-927e-901447ebac2b 260141 0 2020-03-09 08:57:46 +0000 UTC map[name:foo time:153350709] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r7224,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r7224,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r7224,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:57:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:57:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:57:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:57:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.228,StartTime:2020-03-09 08:57:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 08:57:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://fec33c2a3aaf77437e2254130c9182fa738a3edce68519b0e38e3198edb78934,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 9 08:57:50.204: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 9 08:57:52.208: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:57:52.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9454" for this suite. • [SLOW TEST:6.134 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":46,"skipped":844,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:57:52.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4061 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 9 08:57:52.321: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 9 08:58:10.477: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.237:8080/dial?request=hostname&protocol=udp&host=10.244.2.229&port=8081&tries=1'] Namespace:pod-network-test-4061 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 08:58:10.477: INFO: >>> kubeConfig: /root/.kube/config I0309 08:58:10.509740 6 log.go:172] (0xc0016084d0) (0xc00221af00) Create stream I0309 08:58:10.509777 6 log.go:172] (0xc0016084d0) (0xc00221af00) Stream added, broadcasting: 1 I0309 08:58:10.512698 6 log.go:172] (0xc0016084d0) Reply frame received for 1 I0309 08:58:10.512744 6 log.go:172] (0xc0016084d0) (0xc00221afa0) Create stream I0309 08:58:10.512759 6 log.go:172] (0xc0016084d0) 
(0xc00221afa0) Stream added, broadcasting: 3 I0309 08:58:10.513745 6 log.go:172] (0xc0016084d0) Reply frame received for 3 I0309 08:58:10.513774 6 log.go:172] (0xc0016084d0) (0xc001e9ebe0) Create stream I0309 08:58:10.513785 6 log.go:172] (0xc0016084d0) (0xc001e9ebe0) Stream added, broadcasting: 5 I0309 08:58:10.514571 6 log.go:172] (0xc0016084d0) Reply frame received for 5 I0309 08:58:10.586012 6 log.go:172] (0xc0016084d0) Data frame received for 3 I0309 08:58:10.586039 6 log.go:172] (0xc00221afa0) (3) Data frame handling I0309 08:58:10.586057 6 log.go:172] (0xc00221afa0) (3) Data frame sent I0309 08:58:10.586550 6 log.go:172] (0xc0016084d0) Data frame received for 3 I0309 08:58:10.586619 6 log.go:172] (0xc00221afa0) (3) Data frame handling I0309 08:58:10.586701 6 log.go:172] (0xc0016084d0) Data frame received for 5 I0309 08:58:10.586715 6 log.go:172] (0xc001e9ebe0) (5) Data frame handling I0309 08:58:10.588244 6 log.go:172] (0xc0016084d0) Data frame received for 1 I0309 08:58:10.588261 6 log.go:172] (0xc00221af00) (1) Data frame handling I0309 08:58:10.588270 6 log.go:172] (0xc00221af00) (1) Data frame sent I0309 08:58:10.588283 6 log.go:172] (0xc0016084d0) (0xc00221af00) Stream removed, broadcasting: 1 I0309 08:58:10.588295 6 log.go:172] (0xc0016084d0) Go away received I0309 08:58:10.588521 6 log.go:172] (0xc0016084d0) (0xc00221af00) Stream removed, broadcasting: 1 I0309 08:58:10.588536 6 log.go:172] (0xc0016084d0) (0xc00221afa0) Stream removed, broadcasting: 3 I0309 08:58:10.588545 6 log.go:172] (0xc0016084d0) (0xc001e9ebe0) Stream removed, broadcasting: 5 Mar 9 08:58:10.588: INFO: Waiting for responses: map[] Mar 9 08:58:10.591: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.237:8080/dial?request=hostname&protocol=udp&host=10.244.1.236&port=8081&tries=1'] Namespace:pod-network-test-4061 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 08:58:10.591: INFO: >>> kubeConfig: /root/.kube/config I0309 08:58:10.617823 6 log.go:172] (0xc002c80420) (0xc001e9f040) Create stream I0309 08:58:10.617840 6 log.go:172] (0xc002c80420) (0xc001e9f040) Stream added, broadcasting: 1 I0309 08:58:10.620574 6 log.go:172] (0xc002c80420) Reply frame received for 1 I0309 08:58:10.620606 6 log.go:172] (0xc002c80420) (0xc0023006e0) Create stream I0309 08:58:10.620617 6 log.go:172] (0xc002c80420) (0xc0023006e0) Stream added, broadcasting: 3 I0309 08:58:10.621550 6 log.go:172] (0xc002c80420) Reply frame received for 3 I0309 08:58:10.621577 6 log.go:172] (0xc002c80420) (0xc0023008c0) Create stream I0309 08:58:10.621588 6 log.go:172] (0xc002c80420) (0xc0023008c0) Stream added, broadcasting: 5 I0309 08:58:10.622626 6 log.go:172] (0xc002c80420) Reply frame received for 5 I0309 08:58:10.692146 6 log.go:172] (0xc002c80420) Data frame received for 3 I0309 08:58:10.692229 6 log.go:172] (0xc0023006e0) (3) Data frame handling I0309 08:58:10.692320 6 log.go:172] (0xc0023006e0) (3) Data frame sent I0309 08:58:10.692633 6 log.go:172] (0xc002c80420) Data frame received for 5 I0309 08:58:10.692670 6 log.go:172] (0xc0023008c0) (5) Data frame handling I0309 08:58:10.692706 6 log.go:172] (0xc002c80420) Data frame received for 3 I0309 08:58:10.692726 6 log.go:172] (0xc0023006e0) (3) Data frame handling I0309 08:58:10.694342 6 log.go:172] (0xc002c80420) Data frame received for 1 I0309 08:58:10.694379 6 log.go:172] (0xc001e9f040) (1) Data frame handling I0309 08:58:10.694405 6 log.go:172] (0xc001e9f040) (1) Data frame sent I0309 
08:58:10.694430 6 log.go:172] (0xc002c80420) (0xc001e9f040) Stream removed, broadcasting: 1 I0309 08:58:10.694480 6 log.go:172] (0xc002c80420) Go away received I0309 08:58:10.694515 6 log.go:172] (0xc002c80420) (0xc001e9f040) Stream removed, broadcasting: 1 I0309 08:58:10.694541 6 log.go:172] (0xc002c80420) (0xc0023006e0) Stream removed, broadcasting: 3 I0309 08:58:10.694554 6 log.go:172] (0xc002c80420) (0xc0023008c0) Stream removed, broadcasting: 5 Mar 9 08:58:10.694: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:58:10.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4061" for this suite. • [SLOW TEST:18.464 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":858,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:58:10.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 9 08:58:10.819: INFO: Waiting up to 5m0s for pod "var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344" in namespace "var-expansion-5571" to be "success or failure" Mar 9 08:58:10.849: INFO: Pod "var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344": Phase="Pending", Reason="", readiness=false. Elapsed: 30.672173ms Mar 9 08:58:12.853: INFO: Pod "var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.034356947s STEP: Saw pod success Mar 9 08:58:12.853: INFO: Pod "var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344" satisfied condition "success or failure" Mar 9 08:58:12.856: INFO: Trying to get logs from node jerma-worker pod var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344 container dapi-container: STEP: delete the pod Mar 9 08:58:12.920: INFO: Waiting for pod var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344 to disappear Mar 9 08:58:12.925: INFO: Pod var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:58:12.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5571" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":868,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:58:12.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 9 08:58:12.981: INFO: Waiting up to 5m0s for pod "downward-api-0deb44e2-3303-4108-b152-95f640790583" in namespace "downward-api-8547" to be "success or failure" Mar 9 08:58:12.985: INFO: Pod "downward-api-0deb44e2-3303-4108-b152-95f640790583": Phase="Pending", Reason="", readiness=false. Elapsed: 3.300634ms Mar 9 08:58:14.988: INFO: Pod "downward-api-0deb44e2-3303-4108-b152-95f640790583": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007101468s STEP: Saw pod success Mar 9 08:58:14.988: INFO: Pod "downward-api-0deb44e2-3303-4108-b152-95f640790583" satisfied condition "success or failure" Mar 9 08:58:14.991: INFO: Trying to get logs from node jerma-worker pod downward-api-0deb44e2-3303-4108-b152-95f640790583 container dapi-container: STEP: delete the pod Mar 9 08:58:15.060: INFO: Waiting for pod downward-api-0deb44e2-3303-4108-b152-95f640790583 to disappear Mar 9 08:58:15.080: INFO: Pod downward-api-0deb44e2-3303-4108-b152-95f640790583 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:58:15.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8547" for this suite. 
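[Editor's note — downward API env + $(VAR) expansion] The two specs above share one mechanism: downward-API fieldRefs are injected as environment variables, and the kubelet substitutes $(VAR) references in command/args before the container starts (no shell involved). A combined sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["echo"]
    args: ["host=$(HOST_IP)"]    # expanded by the kubelet, not by a shell
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF

kubectl logs dapi-demo should print host=<node IP>, e.g. host=172.17.0.4 on the worker seen earlier in this run.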
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":892,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:58:15.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6561.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6561.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6561.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6561.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6561.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6561.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 08:58:19.222: INFO: DNS probes using dns-6561/dns-test-dc57b1c0-4a57-491c-85e8-8720ca5257b0 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 08:58:19.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6561" for this suite. 
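[Editor's note — pod hostname records] The getent probes above depend on a pod that sets hostname and subdomain, where the subdomain matches a headless service in the same namespace; the DNS add-on then publishes <hostname>.<subdomain>.<namespace>.svc.cluster.local for it. A sketch, with a hypothetical selector label:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None                # headless
  selector:
    name: dns-querier            # hypothetical label
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2
  containers:
  - name: probe
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
EOF

From inside the pod, getent hosts dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local resolves to the pod IP, which is what the wheezy/jessie loops assert.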
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":50,"skipped":912,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 08:58:19.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 in namespace container-probe-3798 Mar 9 08:58:23.401: INFO: Started pod liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 in namespace container-probe-3798 STEP: checking the pod's current state and verifying that restartCount is present Mar 9 08:58:23.404: INFO: Initial restart count of pod liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is 0 Mar 9 08:58:39.437: INFO: Restart count of pod container-probe-3798/liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is now 1 (16.03297375s elapsed) Mar 9 08:58:59.478: INFO: Restart count of pod container-probe-3798/liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is now 2 (36.073880861s elapsed) Mar 9 08:59:19.520: INFO: Restart count of pod container-probe-3798/liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is now 3 (56.116112124s elapsed) Mar 9 08:59:39.563: INFO: Restart count of pod container-probe-3798/liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is now 4 (1m16.158633582s elapsed) Mar 9 09:00:53.737: INFO: Restart count of pod container-probe-3798/liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is now 5 (2m30.333103315s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:00:53.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3798" for this suite. 
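[Editor's note — restart back-off] Note the restart spacing above: roughly every 20 seconds at first, then about 74 seconds before the fifth restart — the kubelet applies exponential back-off to a container whose liveness probe keeps failing, and the spec asserts only that restartCount never decreases. A sketch of a pod whose probe always fails (the conformance pod probes an HTTP /healthz that goes unhealthy; the exec probe here is a stand-in):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/missing"]   # always fails
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
kubectl get pod liveness-demo -w \
  -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'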
• [SLOW TEST:154.455 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":920,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:00:53.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 9 09:00:53.918: INFO: Waiting up to 5m0s for pod "downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96" in namespace "downward-api-2845" to be "success or failure" Mar 9 09:00:53.979: INFO: Pod "downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96": Phase="Pending", Reason="", readiness=false. Elapsed: 61.442904ms Mar 9 09:00:55.984: INFO: Pod "downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066011704s STEP: Saw pod success Mar 9 09:00:55.984: INFO: Pod "downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96" satisfied condition "success or failure" Mar 9 09:00:55.986: INFO: Trying to get logs from node jerma-worker pod downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96 container dapi-container: STEP: delete the pod Mar 9 09:00:56.018: INFO: Waiting for pod downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96 to disappear Mar 9 09:00:56.027: INFO: Pod downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:00:56.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2845" for this suite. 
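[Editor's note — pod UID] The same downward-API pattern as the host-IP sketch earlier, with a different fieldPath; metadata.uid is useful for log correlation because it stays unique even if a pod name is reused:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo uid=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF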
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":938,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:00:56.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:01:22.153: INFO: Container started at 2020-03-09 09:00:57 +0000 UTC, pod became ready at 2020-03-09 09:01:21 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:01:22.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5270" for this suite. • [SLOW TEST:26.123 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":1066,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:01:22.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6729 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a 
new StatefulSet Mar 9 09:01:22.240: INFO: Found 0 stateful pods, waiting for 3 Mar 9 09:01:32.245: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 9 09:01:32.245: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 9 09:01:32.245: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 9 09:01:32.273: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 9 09:01:42.306: INFO: Updating stateful set ss2 Mar 9 09:01:42.338: INFO: Waiting for Pod statefulset-6729/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 9 09:01:52.841: INFO: Found 2 stateful pods, waiting for 3 Mar 9 09:02:02.846: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 9 09:02:02.846: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 9 09:02:02.846: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 9 09:02:02.869: INFO: Updating stateful set ss2 Mar 9 09:02:02.887: INFO: Waiting for Pod statefulset-6729/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 9 09:02:12.902: INFO: Waiting for Pod statefulset-6729/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 9 09:02:22.912: INFO: Updating stateful set ss2 Mar 9 09:02:22.950: INFO: Waiting for StatefulSet statefulset-6729/ss2 to complete update Mar 9 09:02:22.950: INFO: Waiting for Pod statefulset-6729/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 9 09:02:32.977: INFO: Waiting for StatefulSet statefulset-6729/ss2 to complete update Mar 9 09:02:32.977: INFO: Waiting for Pod statefulset-6729/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 9 09:02:42.956: INFO: Deleting all statefulset in ns statefulset-6729 Mar 9 09:02:42.959: INFO: Scaling statefulset ss2 to 0 Mar 9 09:03:02.975: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 09:03:02.978: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:03:02.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6729" for this suite. 
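[Editor's note — partitioned rolling updates] Both the canary and the phased rollout above are driven by spec.updateStrategy.rollingUpdate.partition: only pods whose ordinal is at or above the partition move to the update revision, the rest stay on the current one (the two ss2-… hashes in the log). A sketch of the knob against this set's three replicas:

# canary: with partition=2 only ss2-2 adopts the new template
kubectl patch statefulset ss2 -n statefulset-6729 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# phased rollout: lower the partition to pull in ss2-1, then ss2-0
kubectl patch statefulset ss2 -n statefulset-6729 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'
kubectl patch statefulset ss2 -n statefulset-6729 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'

Setting the partition above the replica count (the "Not applying an update" step above) parks the rollout entirely, which doubles as a pause switch.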
• [SLOW TEST:100.837 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":54,"skipped":1074,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:03:02.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 09:03:07.175: INFO: DNS probes using dns-test-fc354793-cd7c-455e-9d3d-79389a1638a0 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 09:03:11.302: INFO: File wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 9 09:03:11.305: INFO: File jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 9 09:03:11.305: INFO: Lookups using dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 failed for: [wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local] Mar 9 09:03:16.308: INFO: File wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 9 09:03:16.310: INFO: File jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 9 09:03:16.310: INFO: Lookups using dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 failed for: [wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local] Mar 9 09:03:21.309: INFO: File wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 9 09:03:21.311: INFO: File jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 9 09:03:21.311: INFO: Lookups using dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 failed for: [wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local] Mar 9 09:03:26.308: INFO: File wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 9 09:03:26.310: INFO: File jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 9 09:03:26.310: INFO: Lookups using dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 failed for: [wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local] Mar 9 09:03:31.308: INFO: File wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 9 09:03:31.312: INFO: File jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 9 09:03:31.312: INFO: Lookups using dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 failed for: [wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local] Mar 9 09:03:36.311: INFO: DNS probes using dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 09:03:40.466: INFO: DNS probes using dns-test-e47a97f9-0d3d-4609-abc1-bc948d21c634 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:03:40.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3434" for this suite. • [SLOW TEST:37.592 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":55,"skipped":1117,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:03:40.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6565 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 9 09:03:40.680: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 9 09:03:58.802: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.242:8080/dial?request=hostname&protocol=http&host=10.244.2.241&port=8080&tries=1'] Namespace:pod-network-test-6565 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:03:58.802: INFO: >>> kubeConfig: /root/.kube/config I0309 09:03:58.837896 6 log.go:172] (0xc002c802c0) (0xc001f15900) Create stream I0309 09:03:58.837925 6 log.go:172] (0xc002c802c0) (0xc001f15900) Stream added, broadcasting: 1 
I0309 09:03:58.839936 6 log.go:172] (0xc002c802c0) Reply frame received for 1 I0309 09:03:58.839974 6 log.go:172] (0xc002c802c0) (0xc001487860) Create stream I0309 09:03:58.839987 6 log.go:172] (0xc002c802c0) (0xc001487860) Stream added, broadcasting: 3 I0309 09:03:58.840768 6 log.go:172] (0xc002c802c0) Reply frame received for 3 I0309 09:03:58.840803 6 log.go:172] (0xc002c802c0) (0xc001e9e3c0) Create stream I0309 09:03:58.840816 6 log.go:172] (0xc002c802c0) (0xc001e9e3c0) Stream added, broadcasting: 5 I0309 09:03:58.841673 6 log.go:172] (0xc002c802c0) Reply frame received for 5 I0309 09:03:58.895811 6 log.go:172] (0xc002c802c0) Data frame received for 3 I0309 09:03:58.895831 6 log.go:172] (0xc001487860) (3) Data frame handling I0309 09:03:58.895846 6 log.go:172] (0xc001487860) (3) Data frame sent I0309 09:03:58.896312 6 log.go:172] (0xc002c802c0) Data frame received for 5 I0309 09:03:58.896337 6 log.go:172] (0xc001e9e3c0) (5) Data frame handling I0309 09:03:58.896541 6 log.go:172] (0xc002c802c0) Data frame received for 3 I0309 09:03:58.896560 6 log.go:172] (0xc001487860) (3) Data frame handling I0309 09:03:58.898059 6 log.go:172] (0xc002c802c0) Data frame received for 1 I0309 09:03:58.898085 6 log.go:172] (0xc001f15900) (1) Data frame handling I0309 09:03:58.898106 6 log.go:172] (0xc001f15900) (1) Data frame sent I0309 09:03:58.898330 6 log.go:172] (0xc002c802c0) (0xc001f15900) Stream removed, broadcasting: 1 I0309 09:03:58.898435 6 log.go:172] (0xc002c802c0) (0xc001f15900) Stream removed, broadcasting: 1 I0309 09:03:58.898453 6 log.go:172] (0xc002c802c0) (0xc001487860) Stream removed, broadcasting: 3 I0309 09:03:58.898475 6 log.go:172] (0xc002c802c0) Go away received I0309 09:03:58.898504 6 log.go:172] (0xc002c802c0) (0xc001e9e3c0) Stream removed, broadcasting: 5 Mar 9 09:03:58.898: INFO: Waiting for responses: map[] Mar 9 09:03:58.901: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.242:8080/dial?request=hostname&protocol=http&host=10.244.1.244&port=8080&tries=1'] Namespace:pod-network-test-6565 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:03:58.901: INFO: >>> kubeConfig: /root/.kube/config I0309 09:03:58.930569 6 log.go:172] (0xc001608370) (0xc000fe2aa0) Create stream I0309 09:03:58.930593 6 log.go:172] (0xc001608370) (0xc000fe2aa0) Stream added, broadcasting: 1 I0309 09:03:58.932517 6 log.go:172] (0xc001608370) Reply frame received for 1 I0309 09:03:58.932548 6 log.go:172] (0xc001608370) (0xc001f159a0) Create stream I0309 09:03:58.932559 6 log.go:172] (0xc001608370) (0xc001f159a0) Stream added, broadcasting: 3 I0309 09:03:58.933368 6 log.go:172] (0xc001608370) Reply frame received for 3 I0309 09:03:58.933399 6 log.go:172] (0xc001608370) (0xc001f15a40) Create stream I0309 09:03:58.933411 6 log.go:172] (0xc001608370) (0xc001f15a40) Stream added, broadcasting: 5 I0309 09:03:58.934250 6 log.go:172] (0xc001608370) Reply frame received for 5 I0309 09:03:59.022599 6 log.go:172] (0xc001608370) Data frame received for 3 I0309 09:03:59.022626 6 log.go:172] (0xc001f159a0) (3) Data frame handling I0309 09:03:59.022642 6 log.go:172] (0xc001f159a0) (3) Data frame sent I0309 09:03:59.023055 6 log.go:172] (0xc001608370) Data frame received for 3 I0309 09:03:59.023074 6 log.go:172] (0xc001f159a0) (3) Data frame handling I0309 09:03:59.023312 6 log.go:172] (0xc001608370) Data frame received for 5 I0309 09:03:59.023326 6 log.go:172] (0xc001f15a40) (5) Data frame handling I0309 
09:03:59.024902 6 log.go:172] (0xc001608370) Data frame received for 1 I0309 09:03:59.024923 6 log.go:172] (0xc000fe2aa0) (1) Data frame handling I0309 09:03:59.024933 6 log.go:172] (0xc000fe2aa0) (1) Data frame sent I0309 09:03:59.024942 6 log.go:172] (0xc001608370) (0xc000fe2aa0) Stream removed, broadcasting: 1 I0309 09:03:59.024955 6 log.go:172] (0xc001608370) Go away received I0309 09:03:59.025140 6 log.go:172] (0xc001608370) (0xc000fe2aa0) Stream removed, broadcasting: 1 I0309 09:03:59.025171 6 log.go:172] (0xc001608370) (0xc001f159a0) Stream removed, broadcasting: 3 I0309 09:03:59.025185 6 log.go:172] (0xc001608370) (0xc001f15a40) Stream removed, broadcasting: 5 Mar 9 09:03:59.025: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:03:59.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6565" for this suite. • [SLOW TEST:18.442 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":1122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:03:59.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:03:59.098: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:03:59.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4334" for this suite. 
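Note on the intra-pod HTTP spec above (completed=56): the framework never curls the target pod directly. It execs into the host-network test pod and asks the agnhost webserver's /dial endpoint to make the hop, so the hostname in the response proves pod-to-pod reachability. The same request by hand (pod name and IPs are the ones from this run and will differ):

    # 10.244.2.242 is the agnhost test container; 10.244.2.241 is the target pod.
    kubectl exec -n pod-network-test-6565 host-test-container-pod -c agnhost -- \
      /bin/sh -c "curl -g -q -s 'http://10.244.2.242:8080/dial?request=hostname&protocol=http&host=10.244.2.241&port=8080&tries=1'"
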
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":57,"skipped":1157,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:03:59.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 9 09:03:59.792: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 9 09:04:10.446: INFO: >>> kubeConfig: /root/.kube/config Mar 9 09:04:13.393: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:04:22.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8376" for this suite. 
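Note on the CustomResourcePublishOpenAPI spec above: the log only shows kubeConfig loads, but what the spec asserts is that CRs from the multi-version CRD and from the two single-version CRDs all appear in the aggregated OpenAPI document. A quick manual spot check, assuming jq is available (the grep pattern matches the random e2e-test-crd-publish-openapi-* groups the test generates):

    kubectl get --raw /openapi/v2 \
      | jq -r '.definitions | keys[]' \
      | grep crd-publish-openapi
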
• [SLOW TEST:23.097 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":58,"skipped":1172,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:04:22.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:04:22.846: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15" in namespace "downward-api-8670" to be "success or failure" Mar 9 09:04:22.851: INFO: Pod "downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116587ms Mar 9 09:04:25.200: INFO: Pod "downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353371368s Mar 9 09:04:27.204: INFO: Pod "downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.357162553s STEP: Saw pod success Mar 9 09:04:27.204: INFO: Pod "downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15" satisfied condition "success or failure" Mar 9 09:04:27.206: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15 container client-container: STEP: delete the pod Mar 9 09:04:27.319: INFO: Waiting for pod downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15 to disappear Mar 9 09:04:27.324: INFO: Pod downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:04:27.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8670" for this suite. 
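Note on the Downward API volume spec above: it relies on a documented fallback, namely that when a container declares no CPU limit, a downwardAPI resourceFieldRef for limits.cpu reports the node's allocatable CPU instead. A minimal sketch to reproduce that by hand (pod name and image are illustrative, not the framework's exact objects):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        # No resources.limits.cpu is set, so the downward API falls back to
        # the node's allocatable CPU.
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
    EOF
    kubectl logs downwardapi-volume-demo   # prints allocatable CPU in whole cores
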
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1187,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:04:27.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:04:28.172: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:04:31.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:04:31.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6315-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:04:32.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9555" for this suite. STEP: Destroying namespace "webhook-9555-markers" for this suite. 
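Note on the mutating webhook spec above: the registration it performs via the AdmissionRegistration API corresponds roughly to the MutatingWebhookConfiguration sketched below. The service name and namespace match this run's deployment, but the path, rule details, and CA bundle are illustrative assumptions, not the framework's exact object:

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: e2e-test-mutating-webhook-sketch
    webhooks:
    - name: mutate-custom-resource.example.com
      clientConfig:
        service:
          name: e2e-test-webhook
          namespace: webhook-9555
          path: /mutating-custom-resource   # assumed handler path
        caBundle: Cg==   # placeholder; use the CA that signed the serving cert
      rules:
      - apiGroups: ["webhook.example.com"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-6315-crds"]
      admissionReviewVersions: ["v1", "v1beta1"]
      sideEffects: None
    EOF

The pruning angle is that mutations injected by the webhook survive only if the CRD's structural schema declares the fields; anything else is pruned by the apiserver after admission.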
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.229 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":60,"skipped":1202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:04:32.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-15ea68aa-89c8-4e36-a616-8cd86ef88787 in namespace container-probe-1670 Mar 9 09:04:34.615: INFO: Started pod test-webserver-15ea68aa-89c8-4e36-a616-8cd86ef88787 in namespace container-probe-1670 STEP: checking the pod's current state and verifying that restartCount is present Mar 9 09:04:34.621: INFO: Initial restart count of pod test-webserver-15ea68aa-89c8-4e36-a616-8cd86ef88787 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:08:35.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1670" for this suite. 
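Note on the four-minute liveness spec above: it is the negative case, a /healthz probe that keeps succeeding must never bump restartCount. A sketch of an equivalent pod, with nginx probed on / standing in for the test-webserver image:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-ok-demo
    spec:
      containers:
      - name: webserver
        image: nginx   # any server that keeps answering the probed path
        livenessProbe:
          httpGet:
            path: /    # the e2e image serves /healthz instead
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3
    EOF
    # After ~4 minutes of watching, as the spec does, this should still print 0:
    kubectl get pod liveness-ok-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
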
• [SLOW TEST:242.727 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1229,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:08:35.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 9 09:08:39.396: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 9 09:08:39.407: INFO: Pod pod-with-prestop-http-hook still exists Mar 9 09:08:41.408: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 9 09:08:41.411: INFO: Pod pod-with-prestop-http-hook still exists Mar 9 09:08:43.408: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 9 09:08:43.412: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:08:43.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2199" for this suite. 
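Note on the preStop spec above: the hook is an httpGet aimed at the separately created hook-handler pod, and the assertion is simply that the handler saw the request after the pod was deleted. The shape of the pod, with a hypothetical handler address:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-http-hook-demo
    spec:
      containers:
      - name: main
        image: nginx
        lifecycle:
          preStop:
            httpGet:
              host: 10.244.1.10        # hypothetical: the hook-handler pod's IP
              path: /echo?msg=prestop
              port: 8080
    EOF
    # Deleting the pod fires the GET before the container is stopped:
    kubectl delete pod prestop-http-hook-demo
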
• [SLOW TEST:8.153 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1231,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:08:43.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 9 09:08:51.575: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 9 09:08:51.584: INFO: Pod pod-with-poststart-exec-hook still exists Mar 9 09:08:53.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 9 09:08:53.588: INFO: Pod pod-with-poststart-exec-hook still exists Mar 9 09:08:55.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 9 09:08:55.588: INFO: Pod pod-with-poststart-exec-hook still exists Mar 9 09:08:57.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 9 09:08:57.589: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:08:57.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-515" for this suite. 
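Note on the postStart variant above: it is symmetric, except the hook runs inside the new container right after it starts, concurrently with the entrypoint, and the check happens before the pod is deleted. A minimal exec form (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: poststart-exec-hook-demo
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          postStart:
            exec:
              command: ["sh", "-c", "echo poststart > /tmp/hook"]
    EOF
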
• [SLOW TEST:14.185 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1237,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:08:57.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 9 09:09:01.716: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9005 PodName:pod-sharedvolume-7fa73bcc-fc1f-4225-bec5-450e5cc8936c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:09:01.716: INFO: >>> kubeConfig: /root/.kube/config I0309 09:09:01.758714 6 log.go:172] (0xc0027bbd90) (0xc001e9e820) Create stream I0309 09:09:01.758751 6 log.go:172] (0xc0027bbd90) (0xc001e9e820) Stream added, broadcasting: 1 I0309 09:09:01.761074 6 log.go:172] (0xc0027bbd90) Reply frame received for 1 I0309 09:09:01.761118 6 log.go:172] (0xc0027bbd90) (0xc001e9e8c0) Create stream I0309 09:09:01.761133 6 log.go:172] (0xc0027bbd90) (0xc001e9e8c0) Stream added, broadcasting: 3 I0309 09:09:01.762135 6 log.go:172] (0xc0027bbd90) Reply frame received for 3 I0309 09:09:01.762173 6 log.go:172] (0xc0027bbd90) (0xc001e9e960) Create stream I0309 09:09:01.762187 6 log.go:172] (0xc0027bbd90) (0xc001e9e960) Stream added, broadcasting: 5 I0309 09:09:01.763199 6 log.go:172] (0xc0027bbd90) Reply frame received for 5 I0309 09:09:01.831003 6 log.go:172] (0xc0027bbd90) Data frame received for 5 I0309 09:09:01.831040 6 log.go:172] (0xc001e9e960) (5) Data frame handling I0309 09:09:01.831063 6 log.go:172] (0xc0027bbd90) Data frame received for 3 I0309 09:09:01.831077 6 log.go:172] (0xc001e9e8c0) (3) Data frame handling I0309 09:09:01.831092 6 log.go:172] (0xc001e9e8c0) (3) Data frame sent I0309 09:09:01.831106 6 log.go:172] (0xc0027bbd90) Data frame received for 3 I0309 09:09:01.831124 6 log.go:172] (0xc001e9e8c0) (3) Data frame handling I0309 09:09:01.832280 6 log.go:172] (0xc0027bbd90) Data frame received for 1 I0309 09:09:01.832304 6 log.go:172] (0xc001e9e820) (1) Data frame handling I0309 09:09:01.832325 6 log.go:172] (0xc001e9e820) (1) Data frame sent I0309 09:09:01.832343 6 log.go:172] 
(0xc0027bbd90) (0xc001e9e820) Stream removed, broadcasting: 1 I0309 09:09:01.832417 6 log.go:172] (0xc0027bbd90) (0xc001e9e820) Stream removed, broadcasting: 1 I0309 09:09:01.832436 6 log.go:172] (0xc0027bbd90) (0xc001e9e8c0) Stream removed, broadcasting: 3 I0309 09:09:01.832457 6 log.go:172] (0xc0027bbd90) Go away received I0309 09:09:01.832498 6 log.go:172] (0xc0027bbd90) (0xc001e9e960) Stream removed, broadcasting: 5 Mar 9 09:09:01.832: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:09:01.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9005" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":64,"skipped":1258,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:09:01.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:09:18.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3747" for this suite. • [SLOW TEST:16.262 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":65,"skipped":1269,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:09:18.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-ad45d318-bb8d-4497-9d42-4a46ec176c69 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-ad45d318-bb8d-4497-9d42-4a46ec176c69 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:09:22.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-664" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1275,"failed":0} ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:09:22.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9389 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9389;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9389 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9389;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9389.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9389.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9389.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9389.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9389.svc 
SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9389.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9389.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9389.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 60.145.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.145.60_udp@PTR;check="$$(dig +tcp +noall +answer +search 60.145.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.145.60_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9389 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9389;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9389 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9389;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9389.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9389.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9389.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9389.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9389.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9389.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9389.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9389.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 60.145.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.145.60_udp@PTR;check="$$(dig +tcp +noall +answer +search 60.145.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.145.60_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 09:09:26.445: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.448: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.451: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.454: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.457: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.460: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.463: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.465: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.483: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.486: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.488: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.490: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.493: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.495: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.498: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.500: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:26.514: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc] Mar 9 09:09:31.518: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.521: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.523: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.525: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.528: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.530: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.533: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.535: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.551: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.553: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.555: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.557: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.560: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.562: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.564: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.566: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:31.584: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc] Mar 9 09:09:36.522: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.525: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.529: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.532: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod 
dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.536: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.538: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.540: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.543: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.564: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.567: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.570: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.573: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.576: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.579: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.582: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.586: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:36.599: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc] Mar 9 09:09:41.518: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.522: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.525: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.529: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.532: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.536: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.539: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.542: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.564: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.567: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.570: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.573: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.577: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod 
dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.580: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.583: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.586: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:41.603: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc] Mar 9 09:09:46.518: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.522: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.524: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.527: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.529: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.532: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.535: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.539: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod 
dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.559: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.562: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.564: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.573: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.591: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.594: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.597: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.600: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:46.615: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc] Mar 9 09:09:51.518: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.521: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.523: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could 
not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.526: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.528: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.530: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.533: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.535: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.551: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.553: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.555: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.557: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.559: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.562: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.564: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.566: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76) Mar 9 09:09:51.579: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc] Mar 9 09:09:56.591: INFO: DNS probes using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:09:56.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9389" for this suite. • [SLOW TEST:34.592 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":67,"skipped":1275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:09:56.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 9 09:10:00.926: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 9 09:10:06.049: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:10:06.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6497" for this suite. 
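The grace-period flow above can be reproduced by hand; a minimal sketch with kubectl (pod name and grace period are illustrative, not taken from the test):

$ kubectl run graceful --image=nginx --restart=Never
$ kubectl delete pod graceful --grace-period=30
# the kubelet sends SIGTERM, waits up to the grace period, then SIGKILLs;
# the API object is removed once the kubelet confirms termination
$ kubectl get pod graceful        # NotFound once the deletion completes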
• [SLOW TEST:9.234 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":68,"skipped":1298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:10:06.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0309 09:10:46.207142 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 9 09:10:46.207: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:10:46.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7414" for this suite. 
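The orphaning verified above is driven entirely by the delete options; a minimal sketch (RC name and label are illustrative; kubectl of this vintage spells the flag --cascade=false, newer releases use --cascade=orphan):

$ kubectl delete rc my-rc --cascade=false
$ kubectl get pods -l name=my-rc
# the pods outlive the RC: the garbage collector clears their
# ownerReferences instead of deleting them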
• [SLOW TEST:40.154 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":69,"skipped":1333,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:10:46.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:10:46.797: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:10:49.855: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:10:59.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-54" for this suite. STEP: Destroying namespace "webhook-54-markers" for this suite. 
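Registering a denying webhook like the one above takes a ValidatingWebhookConfiguration; a minimal sketch, assuming a service named e2e-test-webhook in namespace webhook-54 (the path, port, and configuration name are illustrative, not the e2e suite's actual values):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-example
webhooks:
- name: deny.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: webhook-54
      name: e2e-test-webhook
      path: /always-deny
      port: 443
    # caBundle: <base64 PEM bundle that signed the webhook server's cert>
EOF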
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.870 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":70,"skipped":1335,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:11:00.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-d91fd1e0-4b8d-4cff-9f9f-ebe095a5a2f1 STEP: Creating a pod to test consume configMaps Mar 9 09:11:00.163: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf" in namespace "configmap-3070" to be "success or failure" Mar 9 09:11:00.167: INFO: Pod "pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227059ms Mar 9 09:11:02.171: INFO: Pod "pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007928477s STEP: Saw pod success Mar 9 09:11:02.171: INFO: Pod "pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf" satisfied condition "success or failure" Mar 9 09:11:02.174: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf container configmap-volume-test: STEP: delete the pod Mar 9 09:11:02.187: INFO: Waiting for pod pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf to disappear Mar 9 09:11:02.191: INFO: Pod pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:11:02.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3070" for this suite. 
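The mappings-and-mode behavior above comes from the items list on a configMap volume source: each entry picks a key, gives it a path inside the mount, and may set a per-file mode. A minimal pod sketch (all names illustrative):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cfg && cat /etc/cfg/renamed"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: my-config
      items:
      - key: data-1        # key inside the ConfigMap
        path: renamed      # filename under the mount point
        mode: 0400         # per-item file mode, overrides defaultMode
EOF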
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1335,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:11:02.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 9 09:11:02.283: INFO: Waiting up to 5m0s for pod "pod-a9272fe7-460a-4ab8-9208-37e84df355f4" in namespace "emptydir-6225" to be "success or failure" Mar 9 09:11:02.308: INFO: Pod "pod-a9272fe7-460a-4ab8-9208-37e84df355f4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.43163ms Mar 9 09:11:04.312: INFO: Pod "pod-a9272fe7-460a-4ab8-9208-37e84df355f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.028573689s STEP: Saw pod success Mar 9 09:11:04.312: INFO: Pod "pod-a9272fe7-460a-4ab8-9208-37e84df355f4" satisfied condition "success or failure" Mar 9 09:11:04.315: INFO: Trying to get logs from node jerma-worker pod pod-a9272fe7-460a-4ab8-9208-37e84df355f4 container test-container: STEP: delete the pod Mar 9 09:11:04.374: INFO: Waiting for pod pod-a9272fe7-460a-4ab8-9208-37e84df355f4 to disappear Mar 9 09:11:04.382: INFO: Pod pod-a9272fe7-460a-4ab8-9208-37e84df355f4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:11:04.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6225" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1339,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:11:04.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:11:05.100: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 9 09:11:07.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719341865, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719341865, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719341865, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719341865, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:11:10.147: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:11:10.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9858-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:11:11.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9207" for this suite. STEP: Destroying namespace "webhook-9207-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.031 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":73,"skipped":1358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:11:11.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-48c7e354-22d3-4291-9df7-bfe964336639 STEP: Creating a pod to test consume configMaps Mar 9 09:11:11.558: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb" in namespace "projected-8288" to be "success or failure" Mar 9 09:11:11.564: INFO: Pod "pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.775622ms Mar 9 09:11:13.593: INFO: Pod "pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035401998s Mar 9 09:11:15.612: INFO: Pod "pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054369159s STEP: Saw pod success Mar 9 09:11:15.612: INFO: Pod "pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb" satisfied condition "success or failure" Mar 9 09:11:15.615: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb container projected-configmap-volume-test: STEP: delete the pod Mar 9 09:11:15.750: INFO: Waiting for pod pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb to disappear Mar 9 09:11:15.752: INFO: Pod pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:11:15.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8288" for this suite. 
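Consuming one ConfigMap through two volumes in the same pod, as above, is just two volume entries referencing the same source; a sketch using projected volumes (names illustrative):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cmp /etc/cfg-a/data-1 /etc/cfg-b/data-1 && echo same"]
    volumeMounts:
    - name: cfg-a
      mountPath: /etc/cfg-a
    - name: cfg-b
      mountPath: /etc/cfg-b
  volumes:
  - name: cfg-a
    projected:
      sources:
      - configMap:
          name: my-config
  - name: cfg-b
    projected:
      sources:
      - configMap:
          name: my-config
EOF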
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1381,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:11:15.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:11:16.466: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:11:19.508: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:11:19.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8448" for this suite. STEP: Destroying namespace "webhook-8448-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":75,"skipped":1395,"failed":0} SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:11:19.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 9 09:11:19.755: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:11:24.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-822" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":76,"skipped":1401,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:11:24.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5229 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5229 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5229 Mar 9 09:11:24.348: INFO: Found 0 stateful pods, waiting for 1 Mar 9 09:11:34.352: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will 
halt with unhealthy stateful pod Mar 9 09:11:34.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 09:11:36.365: INFO: stderr: "I0309 09:11:36.250225 245 log.go:172] (0xc0000f4f20) (0xc0005c9e00) Create stream\nI0309 09:11:36.250257 245 log.go:172] (0xc0000f4f20) (0xc0005c9e00) Stream added, broadcasting: 1\nI0309 09:11:36.252524 245 log.go:172] (0xc0000f4f20) Reply frame received for 1\nI0309 09:11:36.252559 245 log.go:172] (0xc0000f4f20) (0xc00056e640) Create stream\nI0309 09:11:36.252568 245 log.go:172] (0xc0000f4f20) (0xc00056e640) Stream added, broadcasting: 3\nI0309 09:11:36.253379 245 log.go:172] (0xc0000f4f20) Reply frame received for 3\nI0309 09:11:36.253410 245 log.go:172] (0xc0000f4f20) (0xc00051a6e0) Create stream\nI0309 09:11:36.253420 245 log.go:172] (0xc0000f4f20) (0xc00051a6e0) Stream added, broadcasting: 5\nI0309 09:11:36.254328 245 log.go:172] (0xc0000f4f20) Reply frame received for 5\nI0309 09:11:36.329570 245 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0309 09:11:36.329594 245 log.go:172] (0xc00051a6e0) (5) Data frame handling\nI0309 09:11:36.329606 245 log.go:172] (0xc00051a6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:11:36.359439 245 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0309 09:11:36.359463 245 log.go:172] (0xc00056e640) (3) Data frame handling\nI0309 09:11:36.359489 245 log.go:172] (0xc00056e640) (3) Data frame sent\nI0309 09:11:36.359864 245 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0309 09:11:36.359884 245 log.go:172] (0xc00056e640) (3) Data frame handling\nI0309 09:11:36.359915 245 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0309 09:11:36.359941 245 log.go:172] (0xc00051a6e0) (5) Data frame handling\nI0309 09:11:36.361755 245 log.go:172] (0xc0000f4f20) Data frame received for 1\nI0309 09:11:36.361772 245 log.go:172] (0xc0005c9e00) (1) Data frame handling\nI0309 09:11:36.361788 245 log.go:172] (0xc0005c9e00) (1) Data frame sent\nI0309 09:11:36.361799 245 log.go:172] (0xc0000f4f20) (0xc0005c9e00) Stream removed, broadcasting: 1\nI0309 09:11:36.361817 245 log.go:172] (0xc0000f4f20) Go away received\nI0309 09:11:36.362356 245 log.go:172] (0xc0000f4f20) (0xc0005c9e00) Stream removed, broadcasting: 1\nI0309 09:11:36.362374 245 log.go:172] (0xc0000f4f20) (0xc00056e640) Stream removed, broadcasting: 3\nI0309 09:11:36.362382 245 log.go:172] (0xc0000f4f20) (0xc00051a6e0) Stream removed, broadcasting: 5\n" Mar 9 09:11:36.365: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 09:11:36.365: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 09:11:36.369: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 9 09:11:46.373: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 9 09:11:46.373: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 09:11:46.385: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999358s Mar 9 09:11:47.389: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995465231s Mar 9 09:11:48.409: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991460979s Mar 9 09:11:49.413: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971293996s Mar 
9 09:11:50.416: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.9675221s Mar 9 09:11:51.423: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.964529063s Mar 9 09:11:52.427: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.956989582s Mar 9 09:11:53.432: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.953182588s Mar 9 09:11:54.435: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.948665447s Mar 9 09:11:55.439: INFO: Verifying statefulset ss doesn't scale past 1 for another 945.097553ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5229 Mar 9 09:11:56.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 09:11:56.651: INFO: stderr: "I0309 09:11:56.589933 271 log.go:172] (0xc000505130) (0xc0008120a0) Create stream\nI0309 09:11:56.589990 271 log.go:172] (0xc000505130) (0xc0008120a0) Stream added, broadcasting: 1\nI0309 09:11:56.592579 271 log.go:172] (0xc000505130) Reply frame received for 1\nI0309 09:11:56.592610 271 log.go:172] (0xc000505130) (0xc00056fae0) Create stream\nI0309 09:11:56.592623 271 log.go:172] (0xc000505130) (0xc00056fae0) Stream added, broadcasting: 3\nI0309 09:11:56.593541 271 log.go:172] (0xc000505130) Reply frame received for 3\nI0309 09:11:56.593562 271 log.go:172] (0xc000505130) (0xc000812140) Create stream\nI0309 09:11:56.593570 271 log.go:172] (0xc000505130) (0xc000812140) Stream added, broadcasting: 5\nI0309 09:11:56.594637 271 log.go:172] (0xc000505130) Reply frame received for 5\nI0309 09:11:56.646084 271 log.go:172] (0xc000505130) Data frame received for 5\nI0309 09:11:56.646154 271 log.go:172] (0xc000812140) (5) Data frame handling\nI0309 09:11:56.646167 271 log.go:172] (0xc000812140) (5) Data frame sent\nI0309 09:11:56.646175 271 log.go:172] (0xc000505130) Data frame received for 5\nI0309 09:11:56.646181 271 log.go:172] (0xc000812140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 09:11:56.646197 271 log.go:172] (0xc000505130) Data frame received for 3\nI0309 09:11:56.646204 271 log.go:172] (0xc00056fae0) (3) Data frame handling\nI0309 09:11:56.646211 271 log.go:172] (0xc00056fae0) (3) Data frame sent\nI0309 09:11:56.646218 271 log.go:172] (0xc000505130) Data frame received for 3\nI0309 09:11:56.646225 271 log.go:172] (0xc00056fae0) (3) Data frame handling\nI0309 09:11:56.647322 271 log.go:172] (0xc000505130) Data frame received for 1\nI0309 09:11:56.647336 271 log.go:172] (0xc0008120a0) (1) Data frame handling\nI0309 09:11:56.647342 271 log.go:172] (0xc0008120a0) (1) Data frame sent\nI0309 09:11:56.647362 271 log.go:172] (0xc000505130) (0xc0008120a0) Stream removed, broadcasting: 1\nI0309 09:11:56.647378 271 log.go:172] (0xc000505130) Go away received\nI0309 09:11:56.647701 271 log.go:172] (0xc000505130) (0xc0008120a0) Stream removed, broadcasting: 1\nI0309 09:11:56.647718 271 log.go:172] (0xc000505130) (0xc00056fae0) Stream removed, broadcasting: 3\nI0309 09:11:56.647726 271 log.go:172] (0xc000505130) (0xc000812140) Stream removed, broadcasting: 5\n" Mar 9 09:11:56.651: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 09:11:56.651: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 09:11:56.655: INFO: 
Found 1 stateful pods, waiting for 3 Mar 9 09:12:06.659: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 9 09:12:06.659: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 9 09:12:06.659: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 9 09:12:06.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 09:12:06.882: INFO: stderr: "I0309 09:12:06.807084 292 log.go:172] (0xc00052a840) (0xc00068bcc0) Create stream\nI0309 09:12:06.807124 292 log.go:172] (0xc00052a840) (0xc00068bcc0) Stream added, broadcasting: 1\nI0309 09:12:06.812477 292 log.go:172] (0xc00052a840) Reply frame received for 1\nI0309 09:12:06.812516 292 log.go:172] (0xc00052a840) (0xc00041f400) Create stream\nI0309 09:12:06.812528 292 log.go:172] (0xc00052a840) (0xc00041f400) Stream added, broadcasting: 3\nI0309 09:12:06.814902 292 log.go:172] (0xc00052a840) Reply frame received for 3\nI0309 09:12:06.814925 292 log.go:172] (0xc00052a840) (0xc000510000) Create stream\nI0309 09:12:06.814932 292 log.go:172] (0xc00052a840) (0xc000510000) Stream added, broadcasting: 5\nI0309 09:12:06.815755 292 log.go:172] (0xc00052a840) Reply frame received for 5\nI0309 09:12:06.877113 292 log.go:172] (0xc00052a840) Data frame received for 5\nI0309 09:12:06.877138 292 log.go:172] (0xc000510000) (5) Data frame handling\nI0309 09:12:06.877148 292 log.go:172] (0xc000510000) (5) Data frame sent\nI0309 09:12:06.877159 292 log.go:172] (0xc00052a840) Data frame received for 5\nI0309 09:12:06.877165 292 log.go:172] (0xc000510000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:12:06.877185 292 log.go:172] (0xc00052a840) Data frame received for 3\nI0309 09:12:06.877191 292 log.go:172] (0xc00041f400) (3) Data frame handling\nI0309 09:12:06.877197 292 log.go:172] (0xc00041f400) (3) Data frame sent\nI0309 09:12:06.877203 292 log.go:172] (0xc00052a840) Data frame received for 3\nI0309 09:12:06.877208 292 log.go:172] (0xc00041f400) (3) Data frame handling\nI0309 09:12:06.878312 292 log.go:172] (0xc00052a840) Data frame received for 1\nI0309 09:12:06.878340 292 log.go:172] (0xc00068bcc0) (1) Data frame handling\nI0309 09:12:06.878356 292 log.go:172] (0xc00068bcc0) (1) Data frame sent\nI0309 09:12:06.878498 292 log.go:172] (0xc00052a840) (0xc00068bcc0) Stream removed, broadcasting: 1\nI0309 09:12:06.878519 292 log.go:172] (0xc00052a840) Go away received\nI0309 09:12:06.878995 292 log.go:172] (0xc00052a840) (0xc00068bcc0) Stream removed, broadcasting: 1\nI0309 09:12:06.879017 292 log.go:172] (0xc00052a840) (0xc00041f400) Stream removed, broadcasting: 3\nI0309 09:12:06.879025 292 log.go:172] (0xc00052a840) (0xc000510000) Stream removed, broadcasting: 5\n" Mar 9 09:12:06.882: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 09:12:06.882: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 09:12:06.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 09:12:07.097: INFO: stderr: "I0309 
09:12:06.999192 312 log.go:172] (0xc00063e000) (0xc0007248c0) Create stream\nI0309 09:12:06.999235 312 log.go:172] (0xc00063e000) (0xc0007248c0) Stream added, broadcasting: 1\nI0309 09:12:07.001380 312 log.go:172] (0xc00063e000) Reply frame received for 1\nI0309 09:12:07.001423 312 log.go:172] (0xc00063e000) (0xc0001e2000) Create stream\nI0309 09:12:07.001435 312 log.go:172] (0xc00063e000) (0xc0001e2000) Stream added, broadcasting: 3\nI0309 09:12:07.002057 312 log.go:172] (0xc00063e000) Reply frame received for 3\nI0309 09:12:07.002083 312 log.go:172] (0xc00063e000) (0xc000724960) Create stream\nI0309 09:12:07.002090 312 log.go:172] (0xc00063e000) (0xc000724960) Stream added, broadcasting: 5\nI0309 09:12:07.002773 312 log.go:172] (0xc00063e000) Reply frame received for 5\nI0309 09:12:07.064369 312 log.go:172] (0xc00063e000) Data frame received for 5\nI0309 09:12:07.064393 312 log.go:172] (0xc000724960) (5) Data frame handling\nI0309 09:12:07.064407 312 log.go:172] (0xc000724960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:12:07.092942 312 log.go:172] (0xc00063e000) Data frame received for 5\nI0309 09:12:07.092970 312 log.go:172] (0xc000724960) (5) Data frame handling\nI0309 09:12:07.093238 312 log.go:172] (0xc00063e000) Data frame received for 3\nI0309 09:12:07.093258 312 log.go:172] (0xc0001e2000) (3) Data frame handling\nI0309 09:12:07.093273 312 log.go:172] (0xc0001e2000) (3) Data frame sent\nI0309 09:12:07.093293 312 log.go:172] (0xc00063e000) Data frame received for 3\nI0309 09:12:07.093299 312 log.go:172] (0xc0001e2000) (3) Data frame handling\nI0309 09:12:07.094613 312 log.go:172] (0xc00063e000) Data frame received for 1\nI0309 09:12:07.094627 312 log.go:172] (0xc0007248c0) (1) Data frame handling\nI0309 09:12:07.094639 312 log.go:172] (0xc0007248c0) (1) Data frame sent\nI0309 09:12:07.094647 312 log.go:172] (0xc00063e000) (0xc0007248c0) Stream removed, broadcasting: 1\nI0309 09:12:07.094703 312 log.go:172] (0xc00063e000) Go away received\nI0309 09:12:07.094868 312 log.go:172] (0xc00063e000) (0xc0007248c0) Stream removed, broadcasting: 1\nI0309 09:12:07.094880 312 log.go:172] (0xc00063e000) (0xc0001e2000) Stream removed, broadcasting: 3\nI0309 09:12:07.094886 312 log.go:172] (0xc00063e000) (0xc000724960) Stream removed, broadcasting: 5\n" Mar 9 09:12:07.097: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 09:12:07.097: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 09:12:07.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 09:12:07.306: INFO: stderr: "I0309 09:12:07.212789 332 log.go:172] (0xc0009b7760) (0xc000974820) Create stream\nI0309 09:12:07.212862 332 log.go:172] (0xc0009b7760) (0xc000974820) Stream added, broadcasting: 1\nI0309 09:12:07.216572 332 log.go:172] (0xc0009b7760) Reply frame received for 1\nI0309 09:12:07.216606 332 log.go:172] (0xc0009b7760) (0xc000606780) Create stream\nI0309 09:12:07.216614 332 log.go:172] (0xc0009b7760) (0xc000606780) Stream added, broadcasting: 3\nI0309 09:12:07.217098 332 log.go:172] (0xc0009b7760) Reply frame received for 3\nI0309 09:12:07.217125 332 log.go:172] (0xc0009b7760) (0xc000729540) Create stream\nI0309 09:12:07.217136 332 log.go:172] (0xc0009b7760) (0xc000729540) Stream added, broadcasting: 5\nI0309 09:12:07.217689 
332 log.go:172] (0xc0009b7760) Reply frame received for 5\nI0309 09:12:07.276770 332 log.go:172] (0xc0009b7760) Data frame received for 5\nI0309 09:12:07.276787 332 log.go:172] (0xc000729540) (5) Data frame handling\nI0309 09:12:07.276796 332 log.go:172] (0xc000729540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:12:07.302408 332 log.go:172] (0xc0009b7760) Data frame received for 5\nI0309 09:12:07.302431 332 log.go:172] (0xc000729540) (5) Data frame handling\nI0309 09:12:07.302454 332 log.go:172] (0xc0009b7760) Data frame received for 3\nI0309 09:12:07.302477 332 log.go:172] (0xc000606780) (3) Data frame handling\nI0309 09:12:07.302494 332 log.go:172] (0xc000606780) (3) Data frame sent\nI0309 09:12:07.302502 332 log.go:172] (0xc0009b7760) Data frame received for 3\nI0309 09:12:07.302508 332 log.go:172] (0xc000606780) (3) Data frame handling\nI0309 09:12:07.303359 332 log.go:172] (0xc0009b7760) Data frame received for 1\nI0309 09:12:07.303371 332 log.go:172] (0xc000974820) (1) Data frame handling\nI0309 09:12:07.303385 332 log.go:172] (0xc000974820) (1) Data frame sent\nI0309 09:12:07.303395 332 log.go:172] (0xc0009b7760) (0xc000974820) Stream removed, broadcasting: 1\nI0309 09:12:07.303407 332 log.go:172] (0xc0009b7760) Go away received\nI0309 09:12:07.303721 332 log.go:172] (0xc0009b7760) (0xc000974820) Stream removed, broadcasting: 1\nI0309 09:12:07.303738 332 log.go:172] (0xc0009b7760) (0xc000606780) Stream removed, broadcasting: 3\nI0309 09:12:07.303747 332 log.go:172] (0xc0009b7760) (0xc000729540) Stream removed, broadcasting: 5\n" Mar 9 09:12:07.306: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 09:12:07.306: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 09:12:07.306: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 09:12:07.339: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Mar 9 09:12:17.346: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 9 09:12:17.346: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 9 09:12:17.346: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 9 09:12:17.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999929s Mar 9 09:12:18.373: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984059427s Mar 9 09:12:19.377: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979418705s Mar 9 09:12:20.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975385453s Mar 9 09:12:21.386: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.971223056s Mar 9 09:12:22.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.96692134s Mar 9 09:12:23.394: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.962880814s Mar 9 09:12:24.398: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.958792675s Mar 9 09:12:25.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954603446s Mar 9 09:12:26.405: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.34603ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5229 Mar 9 09:12:27.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5229 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 09:12:27.609: INFO: stderr: "I0309 09:12:27.541532 352 log.go:172] (0xc0009d60b0) (0xc0009ac000) Create stream\nI0309 09:12:27.541578 352 log.go:172] (0xc0009d60b0) (0xc0009ac000) Stream added, broadcasting: 1\nI0309 09:12:27.545471 352 log.go:172] (0xc0009d60b0) Reply frame received for 1\nI0309 09:12:27.545530 352 log.go:172] (0xc0009d60b0) (0xc0009e6000) Create stream\nI0309 09:12:27.545563 352 log.go:172] (0xc0009d60b0) (0xc0009e6000) Stream added, broadcasting: 3\nI0309 09:12:27.548656 352 log.go:172] (0xc0009d60b0) Reply frame received for 3\nI0309 09:12:27.548699 352 log.go:172] (0xc0009d60b0) (0xc000938000) Create stream\nI0309 09:12:27.548721 352 log.go:172] (0xc0009d60b0) (0xc000938000) Stream added, broadcasting: 5\nI0309 09:12:27.549752 352 log.go:172] (0xc0009d60b0) Reply frame received for 5\nI0309 09:12:27.605188 352 log.go:172] (0xc0009d60b0) Data frame received for 3\nI0309 09:12:27.605226 352 log.go:172] (0xc0009d60b0) Data frame received for 5\nI0309 09:12:27.605247 352 log.go:172] (0xc000938000) (5) Data frame handling\nI0309 09:12:27.605257 352 log.go:172] (0xc000938000) (5) Data frame sent\nI0309 09:12:27.605263 352 log.go:172] (0xc0009d60b0) Data frame received for 5\nI0309 09:12:27.605268 352 log.go:172] (0xc000938000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 09:12:27.605283 352 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0309 09:12:27.605290 352 log.go:172] (0xc0009e6000) (3) Data frame sent\nI0309 09:12:27.605295 352 log.go:172] (0xc0009d60b0) Data frame received for 3\nI0309 09:12:27.605303 352 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0309 09:12:27.606222 352 log.go:172] (0xc0009d60b0) Data frame received for 1\nI0309 09:12:27.606248 352 log.go:172] (0xc0009ac000) (1) Data frame handling\nI0309 09:12:27.606262 352 log.go:172] (0xc0009ac000) (1) Data frame sent\nI0309 09:12:27.606331 352 log.go:172] (0xc0009d60b0) (0xc0009ac000) Stream removed, broadcasting: 1\nI0309 09:12:27.606383 352 log.go:172] (0xc0009d60b0) Go away received\nI0309 09:12:27.606745 352 log.go:172] (0xc0009d60b0) (0xc0009ac000) Stream removed, broadcasting: 1\nI0309 09:12:27.606760 352 log.go:172] (0xc0009d60b0) (0xc0009e6000) Stream removed, broadcasting: 3\nI0309 09:12:27.606773 352 log.go:172] (0xc0009d60b0) (0xc000938000) Stream removed, broadcasting: 5\n" Mar 9 09:12:27.609: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 09:12:27.609: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 09:12:27.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 09:12:27.768: INFO: stderr: "I0309 09:12:27.708704 372 log.go:172] (0xc00091cb00) (0xc0006cdd60) Create stream\nI0309 09:12:27.708760 372 log.go:172] (0xc00091cb00) (0xc0006cdd60) Stream added, broadcasting: 1\nI0309 09:12:27.710861 372 log.go:172] (0xc00091cb00) Reply frame received for 1\nI0309 09:12:27.710894 372 log.go:172] (0xc00091cb00) (0xc0006cde00) Create stream\nI0309 09:12:27.710909 372 log.go:172] (0xc00091cb00) (0xc0006cde00) Stream added, broadcasting: 3\nI0309 09:12:27.711828 372 log.go:172] (0xc00091cb00) Reply frame received for 3\nI0309 09:12:27.711851 372 log.go:172] (0xc00091cb00) 
(0xc0006cdea0) Create stream\nI0309 09:12:27.711858 372 log.go:172] (0xc00091cb00) (0xc0006cdea0) Stream added, broadcasting: 5\nI0309 09:12:27.712650 372 log.go:172] (0xc00091cb00) Reply frame received for 5\nI0309 09:12:27.764702 372 log.go:172] (0xc00091cb00) Data frame received for 5\nI0309 09:12:27.764736 372 log.go:172] (0xc0006cdea0) (5) Data frame handling\nI0309 09:12:27.764749 372 log.go:172] (0xc0006cdea0) (5) Data frame sent\nI0309 09:12:27.764760 372 log.go:172] (0xc00091cb00) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 09:12:27.764774 372 log.go:172] (0xc00091cb00) Data frame received for 3\nI0309 09:12:27.764791 372 log.go:172] (0xc0006cde00) (3) Data frame handling\nI0309 09:12:27.764804 372 log.go:172] (0xc0006cde00) (3) Data frame sent\nI0309 09:12:27.764812 372 log.go:172] (0xc00091cb00) Data frame received for 3\nI0309 09:12:27.764827 372 log.go:172] (0xc0006cde00) (3) Data frame handling\nI0309 09:12:27.764859 372 log.go:172] (0xc0006cdea0) (5) Data frame handling\nI0309 09:12:27.765620 372 log.go:172] (0xc00091cb00) Data frame received for 1\nI0309 09:12:27.765659 372 log.go:172] (0xc0006cdd60) (1) Data frame handling\nI0309 09:12:27.765672 372 log.go:172] (0xc0006cdd60) (1) Data frame sent\nI0309 09:12:27.765691 372 log.go:172] (0xc00091cb00) (0xc0006cdd60) Stream removed, broadcasting: 1\nI0309 09:12:27.765707 372 log.go:172] (0xc00091cb00) Go away received\nI0309 09:12:27.765962 372 log.go:172] (0xc00091cb00) (0xc0006cdd60) Stream removed, broadcasting: 1\nI0309 09:12:27.765978 372 log.go:172] (0xc00091cb00) (0xc0006cde00) Stream removed, broadcasting: 3\nI0309 09:12:27.765985 372 log.go:172] (0xc00091cb00) (0xc0006cdea0) Stream removed, broadcasting: 5\n" Mar 9 09:12:27.768: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 09:12:27.768: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 09:12:27.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 09:12:27.929: INFO: stderr: "I0309 09:12:27.864857 392 log.go:172] (0xc000aea630) (0xc0008dc000) Create stream\nI0309 09:12:27.864898 392 log.go:172] (0xc000aea630) (0xc0008dc000) Stream added, broadcasting: 1\nI0309 09:12:27.866822 392 log.go:172] (0xc000aea630) Reply frame received for 1\nI0309 09:12:27.866845 392 log.go:172] (0xc000aea630) (0xc0006f5a40) Create stream\nI0309 09:12:27.866851 392 log.go:172] (0xc000aea630) (0xc0006f5a40) Stream added, broadcasting: 3\nI0309 09:12:27.867642 392 log.go:172] (0xc000aea630) Reply frame received for 3\nI0309 09:12:27.867676 392 log.go:172] (0xc000aea630) (0xc0008dc0a0) Create stream\nI0309 09:12:27.867688 392 log.go:172] (0xc000aea630) (0xc0008dc0a0) Stream added, broadcasting: 5\nI0309 09:12:27.868466 392 log.go:172] (0xc000aea630) Reply frame received for 5\nI0309 09:12:27.924684 392 log.go:172] (0xc000aea630) Data frame received for 5\nI0309 09:12:27.924718 392 log.go:172] (0xc0008dc0a0) (5) Data frame handling\nI0309 09:12:27.924728 392 log.go:172] (0xc0008dc0a0) (5) Data frame sent\nI0309 09:12:27.924735 392 log.go:172] (0xc000aea630) Data frame received for 5\nI0309 09:12:27.924741 392 log.go:172] (0xc0008dc0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 09:12:27.924758 392 log.go:172] (0xc000aea630) Data frame received 
for 3\nI0309 09:12:27.924767 392 log.go:172] (0xc0006f5a40) (3) Data frame handling\nI0309 09:12:27.924777 392 log.go:172] (0xc0006f5a40) (3) Data frame sent\nI0309 09:12:27.924784 392 log.go:172] (0xc000aea630) Data frame received for 3\nI0309 09:12:27.924789 392 log.go:172] (0xc0006f5a40) (3) Data frame handling\nI0309 09:12:27.925956 392 log.go:172] (0xc000aea630) Data frame received for 1\nI0309 09:12:27.925971 392 log.go:172] (0xc0008dc000) (1) Data frame handling\nI0309 09:12:27.925979 392 log.go:172] (0xc0008dc000) (1) Data frame sent\nI0309 09:12:27.925992 392 log.go:172] (0xc000aea630) (0xc0008dc000) Stream removed, broadcasting: 1\nI0309 09:12:27.926010 392 log.go:172] (0xc000aea630) Go away received\nI0309 09:12:27.926312 392 log.go:172] (0xc000aea630) (0xc0008dc000) Stream removed, broadcasting: 1\nI0309 09:12:27.926333 392 log.go:172] (0xc000aea630) (0xc0006f5a40) Stream removed, broadcasting: 3\nI0309 09:12:27.926339 392 log.go:172] (0xc000aea630) (0xc0008dc0a0) Stream removed, broadcasting: 5\n" Mar 9 09:12:27.929: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 09:12:27.929: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 09:12:27.929: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 9 09:12:37.950: INFO: Deleting all statefulset in ns statefulset-5229 Mar 9 09:12:37.953: INFO: Scaling statefulset ss to 0 Mar 9 09:12:37.962: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 09:12:37.965: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:12:38.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5229" for this suite. 
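The ordering guarantees exercised above come from the StatefulSet's default OrderedReady pod management policy: scale-up creates ordinals one at a time and halts while any pod is unready (which is why the test moves index.html aside, failing the httpd readiness probe), and scale-down proceeds in reverse. The manual equivalent:

$ kubectl -n statefulset-5229 scale statefulset ss --replicas=3
$ kubectl -n statefulset-5229 get pods -w     # ss-1 appears only after ss-0 is Ready
$ kubectl -n statefulset-5229 scale statefulset ss --replicas=0
# pods terminate in reverse ordinal order: ss-2, then ss-1, then ss-0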
• [SLOW TEST:73.763 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":77,"skipped":1405,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:12:38.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 9 09:12:38.106: INFO: Waiting up to 5m0s for pod "pod-1537a0bf-dff2-4a7c-a630-26829f571261" in namespace "emptydir-1081" to be "success or failure" Mar 9 09:12:38.134: INFO: Pod "pod-1537a0bf-dff2-4a7c-a630-26829f571261": Phase="Pending", Reason="", readiness=false. Elapsed: 28.200787ms Mar 9 09:12:40.138: INFO: Pod "pod-1537a0bf-dff2-4a7c-a630-26829f571261": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032119571s Mar 9 09:12:42.142: INFO: Pod "pod-1537a0bf-dff2-4a7c-a630-26829f571261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035649932s STEP: Saw pod success Mar 9 09:12:42.142: INFO: Pod "pod-1537a0bf-dff2-4a7c-a630-26829f571261" satisfied condition "success or failure" Mar 9 09:12:42.144: INFO: Trying to get logs from node jerma-worker2 pod pod-1537a0bf-dff2-4a7c-a630-26829f571261 container test-container: STEP: delete the pod Mar 9 09:12:42.177: INFO: Waiting for pod pod-1537a0bf-dff2-4a7c-a630-26829f571261 to disappear Mar 9 09:12:42.183: INFO: Pod pod-1537a0bf-dff2-4a7c-a630-26829f571261 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:12:42.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1081" for this suite. 
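The non-root variant differs from the earlier (root,0644,default) case only in the pod security context; a sketch (UID illustrative; emptyDir directories default to world-writable, so a non-root UID can still create files):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001     # any non-zero UID makes the container non-root
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "id -u && echo hi > /data/f && chmod 0644 /data/f && ls -l /data/f"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}
EOF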
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:12:42.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1876.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1876.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.214.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.214.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.214.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.214.188_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1876.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1876.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.214.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.214.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.214.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.214.188_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 09:12:46.320: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:46.322: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:46.336: INFO: Unable to read jessie_udp@dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:46.340: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:46.342: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:46.358: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local] Mar 9 09:12:51.520: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:51.538: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:51.646: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:51.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:51.662: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local] Mar 9 
09:12:56.368: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:56.371: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:56.395: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:56.397: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:12:56.416: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local] Mar 9 09:13:01.369: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:01.398: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:01.423: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:01.426: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:01.444: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local] Mar 9 09:13:06.368: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:06.370: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:06.393: INFO: Unable 
to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:06.395: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:06.412: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local] Mar 9 09:13:11.368: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:11.371: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:11.392: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:11.395: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c) Mar 9 09:13:11.409: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local] Mar 9 09:13:16.393: INFO: DNS probes using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:13:16.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1876" for this suite. 
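The wheezy/jessie probe pods above simply loop over dig lookups and drop an OK marker file for every name that resolves. A single iteration can be reproduced by hand from any pod image that ships dig (tutum/dnsutils is one common choice; that image is an assumption here, and the dns-1876 names are the ones this particular run generated, so they only exist while its namespace does):

kubectl run dns-check --rm -it --restart=Never --image=tutum/dnsutils -- \
  sh -c 'dig +notcp +noall +answer +search dns-test-service.dns-1876.svc.cluster.local A
         dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1876.svc.cluster.local SRV'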
• [SLOW TEST:34.397 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":79,"skipped":1440,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:13:16.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:13:17.236: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:13:20.301: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:13:20.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2815" for this suite. STEP: Destroying namespace "webhook-2815-markers" for this suite. 
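The discovery-document checks in the webhook test above are plain GETs against the API server and can be repeated with kubectl's raw API access (jq is used here only for readability and is an assumption about the local toolbox):

kubectl get --raw /apis | jq '.groups[] | select(.name == "admissionregistration.k8s.io")'
kubectl get --raw /apis/admissionregistration.k8s.io | jq '.versions'
kubectl get --raw /apis/admissionregistration.k8s.io/v1 | jq -r '.resources[].name'
# the last list is expected to contain mutatingwebhookconfigurations and
# validatingwebhookconfigurations, which is what the test asserts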
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":80,"skipped":1445,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:13:20.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 9 09:13:20.470: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:13:36.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8040" for this suite. 
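The "published spec" this CRD test inspects is the API server's aggregated OpenAPI document, so "check the old version name is removed" amounts to looking at the versioned definition keys. A sketch of the same check; the CRD names are randomly generated per run, so the grep pattern below is only a placeholder, and jq is again assumed:

kubectl get --raw /openapi/v2 | jq -r '.definitions | keys[]' | grep -i crd-publish-openapi
# after the rename, keys for the old version should be gone and keys for the new
# version present, while the untouched version's keys remain unchanged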
• [SLOW TEST:16.436 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":81,"skipped":1447,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:13:36.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 9 09:13:39.499: INFO: Successfully updated pod "annotationupdatef47691f8-4d9f-4ec3-a953-3ece2181d053" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:13:43.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3401" for this suite. 
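The annotation-update test above mounts a projected downwardAPI volume exposing metadata.annotations as a file, then updates the pod's annotations and waits for the kubelet to refresh the file. A minimal sketch of the same mechanism, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-watch             # hypothetical name
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-watch builder=bob --overwrite
kubectl logs -f annotation-watch     # the projected file catches up after the update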
• [SLOW TEST:6.672 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1449,"failed":0} [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:13:43.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 9 09:13:43.649: INFO: Waiting up to 5m0s for pod "client-containers-739d7648-ab21-4d63-91e8-b628c103eb09" in namespace "containers-16" to be "success or failure" Mar 9 09:13:43.664: INFO: Pod "client-containers-739d7648-ab21-4d63-91e8-b628c103eb09": Phase="Pending", Reason="", readiness=false. Elapsed: 15.514395ms Mar 9 09:13:45.668: INFO: Pod "client-containers-739d7648-ab21-4d63-91e8-b628c103eb09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019237396s STEP: Saw pod success Mar 9 09:13:45.668: INFO: Pod "client-containers-739d7648-ab21-4d63-91e8-b628c103eb09" satisfied condition "success or failure" Mar 9 09:13:45.670: INFO: Trying to get logs from node jerma-worker2 pod client-containers-739d7648-ab21-4d63-91e8-b628c103eb09 container test-container: STEP: delete the pod Mar 9 09:13:45.690: INFO: Waiting for pod client-containers-739d7648-ab21-4d63-91e8-b628c103eb09 to disappear Mar 9 09:13:45.706: INFO: Pod client-containers-739d7648-ab21-4d63-91e8-b628c103eb09 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:13:45.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-16" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1449,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:13:45.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:13:56.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7722" for this suite. • [SLOW TEST:11.179 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":84,"skipped":1464,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:13:56.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 9 09:13:58.981: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:13:59.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5036" for this suite. 
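The termination-message cases write either to the container's termination-log file or only to stdout, then assert on what lands in the container status. A hand-run sketch of the "from file" variant exercised above (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-check                # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox:1.29
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# once the pod is Succeeded, the message surfaces in the status, which is the
# "Expected: &{OK} to match" assertion logged above:
kubectl get pod termmsg-check -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'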
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1478,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:13:59.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9029 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 9 09:13:59.085: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 9 09:14:23.200: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.8 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9029 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:14:23.200: INFO: >>> kubeConfig: /root/.kube/config I0309 09:14:23.238743 6 log.go:172] (0xc002c80210) (0xc001486aa0) Create stream I0309 09:14:23.238776 6 log.go:172] (0xc002c80210) (0xc001486aa0) Stream added, broadcasting: 1 I0309 09:14:23.246204 6 log.go:172] (0xc002c80210) Reply frame received for 1 I0309 09:14:23.246252 6 log.go:172] (0xc002c80210) (0xc000d31860) Create stream I0309 09:14:23.246281 6 log.go:172] (0xc002c80210) (0xc000d31860) Stream added, broadcasting: 3 I0309 09:14:23.247639 6 log.go:172] (0xc002c80210) Reply frame received for 3 I0309 09:14:23.247691 6 log.go:172] (0xc002c80210) (0xc0023cc3c0) Create stream I0309 09:14:23.247706 6 log.go:172] (0xc002c80210) (0xc0023cc3c0) Stream added, broadcasting: 5 I0309 09:14:23.248855 6 log.go:172] (0xc002c80210) Reply frame received for 5 I0309 09:14:24.318289 6 log.go:172] (0xc002c80210) Data frame received for 5 I0309 09:14:24.318332 6 log.go:172] (0xc0023cc3c0) (5) Data frame handling I0309 09:14:24.318353 6 log.go:172] (0xc002c80210) Data frame received for 3 I0309 09:14:24.318367 6 log.go:172] (0xc000d31860) (3) Data frame handling I0309 09:14:24.318382 6 log.go:172] (0xc000d31860) (3) Data frame sent I0309 09:14:24.318412 6 log.go:172] (0xc002c80210) Data frame received for 3 I0309 09:14:24.318428 6 log.go:172] (0xc000d31860) (3) Data frame handling I0309 09:14:24.320076 6 log.go:172] (0xc002c80210) Data frame received for 1 I0309 09:14:24.320098 6 log.go:172] (0xc001486aa0) (1) Data frame handling I0309 09:14:24.320124 6 log.go:172] (0xc001486aa0) (1) Data frame sent I0309 09:14:24.320141 6 log.go:172] (0xc002c80210) (0xc001486aa0) Stream removed, broadcasting: 1 I0309 09:14:24.320171 6 log.go:172] (0xc002c80210) Go away received I0309 09:14:24.320234 6 log.go:172] (0xc002c80210) (0xc001486aa0) Stream removed, broadcasting: 1 I0309 09:14:24.320250 6 
log.go:172] (0xc002c80210) (0xc000d31860) Stream removed, broadcasting: 3 I0309 09:14:24.320263 6 log.go:172] (0xc002c80210) (0xc0023cc3c0) Stream removed, broadcasting: 5 Mar 9 09:14:24.320: INFO: Found all expected endpoints: [netserver-0] Mar 9 09:14:24.323: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.13 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9029 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:14:24.323: INFO: >>> kubeConfig: /root/.kube/config I0309 09:14:24.356479 6 log.go:172] (0xc002c808f0) (0xc001487220) Create stream I0309 09:14:24.356509 6 log.go:172] (0xc002c808f0) (0xc001487220) Stream added, broadcasting: 1 I0309 09:14:24.358266 6 log.go:172] (0xc002c808f0) Reply frame received for 1 I0309 09:14:24.358309 6 log.go:172] (0xc002c808f0) (0xc0028620a0) Create stream I0309 09:14:24.358324 6 log.go:172] (0xc002c808f0) (0xc0028620a0) Stream added, broadcasting: 3 I0309 09:14:24.359260 6 log.go:172] (0xc002c808f0) Reply frame received for 3 I0309 09:14:24.359288 6 log.go:172] (0xc002c808f0) (0xc0014872c0) Create stream I0309 09:14:24.359302 6 log.go:172] (0xc002c808f0) (0xc0014872c0) Stream added, broadcasting: 5 I0309 09:14:24.360191 6 log.go:172] (0xc002c808f0) Reply frame received for 5 I0309 09:14:25.409956 6 log.go:172] (0xc002c808f0) Data frame received for 3 I0309 09:14:25.409996 6 log.go:172] (0xc0028620a0) (3) Data frame handling I0309 09:14:25.410011 6 log.go:172] (0xc0028620a0) (3) Data frame sent I0309 09:14:25.410027 6 log.go:172] (0xc002c808f0) Data frame received for 3 I0309 09:14:25.410037 6 log.go:172] (0xc0028620a0) (3) Data frame handling I0309 09:14:25.410086 6 log.go:172] (0xc002c808f0) Data frame received for 5 I0309 09:14:25.410110 6 log.go:172] (0xc0014872c0) (5) Data frame handling I0309 09:14:25.411784 6 log.go:172] (0xc002c808f0) Data frame received for 1 I0309 09:14:25.411810 6 log.go:172] (0xc001487220) (1) Data frame handling I0309 09:14:25.411842 6 log.go:172] (0xc001487220) (1) Data frame sent I0309 09:14:25.411862 6 log.go:172] (0xc002c808f0) (0xc001487220) Stream removed, broadcasting: 1 I0309 09:14:25.411884 6 log.go:172] (0xc002c808f0) Go away received I0309 09:14:25.411993 6 log.go:172] (0xc002c808f0) (0xc001487220) Stream removed, broadcasting: 1 I0309 09:14:25.412015 6 log.go:172] (0xc002c808f0) (0xc0028620a0) Stream removed, broadcasting: 3 I0309 09:14:25.412025 6 log.go:172] (0xc002c808f0) (0xc0014872c0) Stream removed, broadcasting: 5 Mar 9 09:14:25.412: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:14:25.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9029" for this suite. 
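The ExecWithOptions blocks above are the framework driving exactly this probe: from the hostNetwork test pod, send a short UDP payload to each netserver pod on port 8081 and require a non-empty reply. The manual equivalent, using the pod names and IPs from this run (they only exist while the test namespace does):

kubectl exec -n pod-network-test-9029 host-test-container-pod -c agnhost -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.2.8 8081 | grep -v '^\s*$'"
kubectl exec -n pod-network-test-9029 host-test-container-pod -c agnhost -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.13 8081 | grep -v '^\s*$'"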
• [SLOW TEST:26.400 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1483,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:14:25.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 9 09:14:25.488: INFO: Waiting up to 5m0s for pod "pod-5bce757a-dfca-4cf7-a456-eebe6b620b77" in namespace "emptydir-1567" to be "success or failure" Mar 9 09:14:25.504: INFO: Pod "pod-5bce757a-dfca-4cf7-a456-eebe6b620b77": Phase="Pending", Reason="", readiness=false. Elapsed: 15.459984ms Mar 9 09:14:27.508: INFO: Pod "pod-5bce757a-dfca-4cf7-a456-eebe6b620b77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019631189s Mar 9 09:14:29.512: INFO: Pod "pod-5bce757a-dfca-4cf7-a456-eebe6b620b77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02392748s STEP: Saw pod success Mar 9 09:14:29.512: INFO: Pod "pod-5bce757a-dfca-4cf7-a456-eebe6b620b77" satisfied condition "success or failure" Mar 9 09:14:29.515: INFO: Trying to get logs from node jerma-worker2 pod pod-5bce757a-dfca-4cf7-a456-eebe6b620b77 container test-container: STEP: delete the pod Mar 9 09:14:29.541: INFO: Waiting for pod pod-5bce757a-dfca-4cf7-a456-eebe6b620b77 to disappear Mar 9 09:14:29.545: INFO: Pod pod-5bce757a-dfca-4cf7-a456-eebe6b620b77 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:14:29.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1567" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1488,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:14:29.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:14:45.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6150" for this suite. • [SLOW TEST:16.289 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":88,"skipped":1489,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:14:45.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-578f112a-828c-4c34-92f3-190e4c3f5eee STEP: Creating a pod to test consume secrets Mar 9 09:14:45.914: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e" in namespace "projected-4360" to be "success or failure" Mar 9 09:14:45.962: INFO: Pod "pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 47.506422ms Mar 9 09:14:47.966: INFO: Pod "pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.051916602s STEP: Saw pod success Mar 9 09:14:47.966: INFO: Pod "pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e" satisfied condition "success or failure" Mar 9 09:14:47.969: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e container projected-secret-volume-test: STEP: delete the pod Mar 9 09:14:47.991: INFO: Waiting for pod pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e to disappear Mar 9 09:14:48.007: INFO: Pod pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:14:48.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4360" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1504,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:14:48.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:14:48.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0" in namespace "projected-4624" to be "success or failure" Mar 9 09:14:48.133: INFO: Pod "downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.207862ms Mar 9 09:14:50.137: INFO: Pod "downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014719037s STEP: Saw pod success Mar 9 09:14:50.137: INFO: Pod "downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0" satisfied condition "success or failure" Mar 9 09:14:50.139: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0 container client-container: STEP: delete the pod Mar 9 09:14:50.164: INFO: Waiting for pod downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0 to disappear Mar 9 09:14:50.198: INFO: Pod downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:14:50.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4624" for this suite. 
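Of the two projected-volume cases above, the secret-with-mappings one remaps a secret key to a new file name inside the mounted volume. A minimal sketch of that mapping; the secret name, key, and pod name here are illustrative, not the generated ones:

kubectl create secret generic projected-secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-check         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test
          items:
          - key: data-1
            path: new-path-data-1    # key data-1 is remapped to this file name
EOF
kubectl logs secret-mapping-check    # prints value-1 once the pod has succeeded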
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1504,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:14:50.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 9 09:14:50.266: INFO: PodSpec: initContainers in spec.initContainers Mar 9 09:15:40.739: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d5c7c6a2-7728-44db-924e-898d82de540e", GenerateName:"", Namespace:"init-container-4546", SelfLink:"/api/v1/namespaces/init-container-4546/pods/pod-init-d5c7c6a2-7728-44db-924e-898d82de540e", UID:"e4a6a1b7-9b41-44ac-991d-885e81f442a9", ResourceVersion:"265659", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719342090, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"266036703"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8tvxk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0025a6000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8tvxk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8tvxk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8tvxk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002ad4068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020c0240), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ad40f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ad4110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002ad4118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002ad411c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342090, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342090, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342090, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342090, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.18", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.18"}}, StartTime:(*v1.Time)(0xc001cd20a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0010740e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001074150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://97e1c4a7337b8c7b2015ac56f5d34878b593c5aeddf07e7652a7b074ebaeea67", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001cd2180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001cd2100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002ad419f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:15:40.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4546" for this suite. • [SLOW TEST:50.575 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":91,"skipped":1507,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:15:40.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-1363 STEP: creating replication controller nodeport-test in namespace services-1363 I0309 09:15:40.907737 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1363, replica count: 2 I0309 09:15:43.958211 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 9 09:15:43.958: INFO: Creating new exec pod Mar 9 09:15:46.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1363 execpodqkk96 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 9 09:15:47.238: INFO: stderr: "I0309 09:15:47.159227 412 log.go:172] (0xc000592000) (0xc0006ba780) Create stream\nI0309 09:15:47.159285 412 log.go:172] (0xc000592000) (0xc0006ba780) Stream added, broadcasting: 1\nI0309 09:15:47.161919 412 log.go:172] (0xc000592000) Reply frame received for 1\nI0309 09:15:47.161955 412 log.go:172] (0xc000592000) (0xc0004b3540) Create stream\nI0309 09:15:47.161967 412 log.go:172] (0xc000592000) (0xc0004b3540) Stream added, broadcasting: 3\nI0309 09:15:47.163274 412 log.go:172] 
(0xc000592000) Reply frame received for 3\nI0309 09:15:47.163297 412 log.go:172] (0xc000592000) (0xc0004b35e0) Create stream\nI0309 09:15:47.163307 412 log.go:172] (0xc000592000) (0xc0004b35e0) Stream added, broadcasting: 5\nI0309 09:15:47.164628 412 log.go:172] (0xc000592000) Reply frame received for 5\nI0309 09:15:47.233056 412 log.go:172] (0xc000592000) Data frame received for 5\nI0309 09:15:47.233083 412 log.go:172] (0xc0004b35e0) (5) Data frame handling\nI0309 09:15:47.233097 412 log.go:172] (0xc0004b35e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0309 09:15:47.233235 412 log.go:172] (0xc000592000) Data frame received for 5\nI0309 09:15:47.233247 412 log.go:172] (0xc0004b35e0) (5) Data frame handling\nI0309 09:15:47.233257 412 log.go:172] (0xc0004b35e0) (5) Data frame sent\nI0309 09:15:47.233263 412 log.go:172] (0xc000592000) Data frame received for 5\nI0309 09:15:47.233275 412 log.go:172] (0xc0004b35e0) (5) Data frame handling\nI0309 09:15:47.233284 412 log.go:172] (0xc000592000) Data frame received for 3\nI0309 09:15:47.233293 412 log.go:172] (0xc0004b3540) (3) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0309 09:15:47.235033 412 log.go:172] (0xc000592000) Data frame received for 1\nI0309 09:15:47.235066 412 log.go:172] (0xc0006ba780) (1) Data frame handling\nI0309 09:15:47.235079 412 log.go:172] (0xc0006ba780) (1) Data frame sent\nI0309 09:15:47.235098 412 log.go:172] (0xc000592000) (0xc0006ba780) Stream removed, broadcasting: 1\nI0309 09:15:47.235122 412 log.go:172] (0xc000592000) Go away received\nI0309 09:15:47.235418 412 log.go:172] (0xc000592000) (0xc0006ba780) Stream removed, broadcasting: 1\nI0309 09:15:47.235437 412 log.go:172] (0xc000592000) (0xc0004b3540) Stream removed, broadcasting: 3\nI0309 09:15:47.235445 412 log.go:172] (0xc000592000) (0xc0004b35e0) Stream removed, broadcasting: 5\n" Mar 9 09:15:47.238: INFO: stdout: "" Mar 9 09:15:47.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1363 execpodqkk96 -- /bin/sh -x -c nc -zv -t -w 2 10.106.66.156 80' Mar 9 09:15:47.422: INFO: stderr: "I0309 09:15:47.355220 432 log.go:172] (0xc000b72630) (0xc0008e8000) Create stream\nI0309 09:15:47.355267 432 log.go:172] (0xc000b72630) (0xc0008e8000) Stream added, broadcasting: 1\nI0309 09:15:47.356963 432 log.go:172] (0xc000b72630) Reply frame received for 1\nI0309 09:15:47.356998 432 log.go:172] (0xc000b72630) (0xc000711b80) Create stream\nI0309 09:15:47.357005 432 log.go:172] (0xc000b72630) (0xc000711b80) Stream added, broadcasting: 3\nI0309 09:15:47.357780 432 log.go:172] (0xc000b72630) Reply frame received for 3\nI0309 09:15:47.357838 432 log.go:172] (0xc000b72630) (0xc0008e80a0) Create stream\nI0309 09:15:47.357861 432 log.go:172] (0xc000b72630) (0xc0008e80a0) Stream added, broadcasting: 5\nI0309 09:15:47.358596 432 log.go:172] (0xc000b72630) Reply frame received for 5\nI0309 09:15:47.416212 432 log.go:172] (0xc000b72630) Data frame received for 5\nI0309 09:15:47.416242 432 log.go:172] (0xc0008e80a0) (5) Data frame handling\nI0309 09:15:47.416251 432 log.go:172] (0xc0008e80a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.66.156 80\nI0309 09:15:47.416302 432 log.go:172] (0xc000b72630) Data frame received for 5\nI0309 09:15:47.416309 432 log.go:172] (0xc0008e80a0) (5) Data frame handling\nI0309 09:15:47.416315 432 log.go:172] (0xc0008e80a0) (5) Data frame sent\nConnection to 10.106.66.156 80 port [tcp/http] succeeded!\nI0309 09:15:47.416623 432 log.go:172] (0xc000b72630) Data 
frame received for 5\nI0309 09:15:47.416638 432 log.go:172] (0xc0008e80a0) (5) Data frame handling\nI0309 09:15:47.417119 432 log.go:172] (0xc000b72630) Data frame received for 3\nI0309 09:15:47.417148 432 log.go:172] (0xc000711b80) (3) Data frame handling\nI0309 09:15:47.418337 432 log.go:172] (0xc000b72630) Data frame received for 1\nI0309 09:15:47.418388 432 log.go:172] (0xc0008e8000) (1) Data frame handling\nI0309 09:15:47.418412 432 log.go:172] (0xc0008e8000) (1) Data frame sent\nI0309 09:15:47.418428 432 log.go:172] (0xc000b72630) (0xc0008e8000) Stream removed, broadcasting: 1\nI0309 09:15:47.418442 432 log.go:172] (0xc000b72630) Go away received\nI0309 09:15:47.418748 432 log.go:172] (0xc000b72630) (0xc0008e8000) Stream removed, broadcasting: 1\nI0309 09:15:47.418765 432 log.go:172] (0xc000b72630) (0xc000711b80) Stream removed, broadcasting: 3\nI0309 09:15:47.418776 432 log.go:172] (0xc000b72630) (0xc0008e80a0) Stream removed, broadcasting: 5\n" Mar 9 09:15:47.422: INFO: stdout: "" Mar 9 09:15:47.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1363 execpodqkk96 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 31556' Mar 9 09:15:47.593: INFO: stderr: "I0309 09:15:47.523862 453 log.go:172] (0xc000ab13f0) (0xc000a82780) Create stream\nI0309 09:15:47.523902 453 log.go:172] (0xc000ab13f0) (0xc000a82780) Stream added, broadcasting: 1\nI0309 09:15:47.525140 453 log.go:172] (0xc000ab13f0) Reply frame received for 1\nI0309 09:15:47.525167 453 log.go:172] (0xc000ab13f0) (0xc0004f5400) Create stream\nI0309 09:15:47.525175 453 log.go:172] (0xc000ab13f0) (0xc0004f5400) Stream added, broadcasting: 3\nI0309 09:15:47.526084 453 log.go:172] (0xc000ab13f0) Reply frame received for 3\nI0309 09:15:47.526177 453 log.go:172] (0xc000ab13f0) (0xc000650640) Create stream\nI0309 09:15:47.526189 453 log.go:172] (0xc000ab13f0) (0xc000650640) Stream added, broadcasting: 5\nI0309 09:15:47.526799 453 log.go:172] (0xc000ab13f0) Reply frame received for 5\nI0309 09:15:47.588235 453 log.go:172] (0xc000ab13f0) Data frame received for 5\nI0309 09:15:47.588269 453 log.go:172] (0xc000650640) (5) Data frame handling\nI0309 09:15:47.588282 453 log.go:172] (0xc000650640) (5) Data frame sent\nI0309 09:15:47.588292 453 log.go:172] (0xc000ab13f0) Data frame received for 5\nI0309 09:15:47.588299 453 log.go:172] (0xc000650640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.4 31556\nConnection to 172.17.0.4 31556 port [tcp/31556] succeeded!\nI0309 09:15:47.588310 453 log.go:172] (0xc000ab13f0) Data frame received for 3\nI0309 09:15:47.588365 453 log.go:172] (0xc0004f5400) (3) Data frame handling\nI0309 09:15:47.588399 453 log.go:172] (0xc000650640) (5) Data frame sent\nI0309 09:15:47.588741 453 log.go:172] (0xc000ab13f0) Data frame received for 5\nI0309 09:15:47.588764 453 log.go:172] (0xc000650640) (5) Data frame handling\nI0309 09:15:47.590255 453 log.go:172] (0xc000ab13f0) Data frame received for 1\nI0309 09:15:47.590283 453 log.go:172] (0xc000a82780) (1) Data frame handling\nI0309 09:15:47.590293 453 log.go:172] (0xc000a82780) (1) Data frame sent\nI0309 09:15:47.590310 453 log.go:172] (0xc000ab13f0) (0xc000a82780) Stream removed, broadcasting: 1\nI0309 09:15:47.590328 453 log.go:172] (0xc000ab13f0) Go away received\nI0309 09:15:47.590638 453 log.go:172] (0xc000ab13f0) (0xc000a82780) Stream removed, broadcasting: 1\nI0309 09:15:47.590654 453 log.go:172] (0xc000ab13f0) (0xc0004f5400) Stream removed, broadcasting: 3\nI0309 09:15:47.590663 453 log.go:172] (0xc000ab13f0) 
(0xc000650640) Stream removed, broadcasting: 5\n" Mar 9 09:15:47.593: INFO: stdout: "" Mar 9 09:15:47.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1363 execpodqkk96 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 31556' Mar 9 09:15:47.785: INFO: stderr: "I0309 09:15:47.718515 473 log.go:172] (0xc00010ca50) (0xc000689cc0) Create stream\nI0309 09:15:47.718551 473 log.go:172] (0xc00010ca50) (0xc000689cc0) Stream added, broadcasting: 1\nI0309 09:15:47.720520 473 log.go:172] (0xc00010ca50) Reply frame received for 1\nI0309 09:15:47.720545 473 log.go:172] (0xc00010ca50) (0xc000a22000) Create stream\nI0309 09:15:47.720556 473 log.go:172] (0xc00010ca50) (0xc000a22000) Stream added, broadcasting: 3\nI0309 09:15:47.721202 473 log.go:172] (0xc00010ca50) Reply frame received for 3\nI0309 09:15:47.721236 473 log.go:172] (0xc00010ca50) (0xc000a220a0) Create stream\nI0309 09:15:47.721248 473 log.go:172] (0xc00010ca50) (0xc000a220a0) Stream added, broadcasting: 5\nI0309 09:15:47.722269 473 log.go:172] (0xc00010ca50) Reply frame received for 5\nI0309 09:15:47.780906 473 log.go:172] (0xc00010ca50) Data frame received for 5\nI0309 09:15:47.780940 473 log.go:172] (0xc000a220a0) (5) Data frame handling\nI0309 09:15:47.780952 473 log.go:172] (0xc000a220a0) (5) Data frame sent\nI0309 09:15:47.780961 473 log.go:172] (0xc00010ca50) Data frame received for 5\nI0309 09:15:47.780970 473 log.go:172] (0xc000a220a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.5 31556\nConnection to 172.17.0.5 31556 port [tcp/31556] succeeded!\nI0309 09:15:47.780992 473 log.go:172] (0xc00010ca50) Data frame received for 3\nI0309 09:15:47.781000 473 log.go:172] (0xc000a22000) (3) Data frame handling\nI0309 09:15:47.782597 473 log.go:172] (0xc00010ca50) Data frame received for 1\nI0309 09:15:47.782618 473 log.go:172] (0xc000689cc0) (1) Data frame handling\nI0309 09:15:47.782629 473 log.go:172] (0xc000689cc0) (1) Data frame sent\nI0309 09:15:47.782643 473 log.go:172] (0xc00010ca50) (0xc000689cc0) Stream removed, broadcasting: 1\nI0309 09:15:47.782661 473 log.go:172] (0xc00010ca50) Go away received\nI0309 09:15:47.782951 473 log.go:172] (0xc00010ca50) (0xc000689cc0) Stream removed, broadcasting: 1\nI0309 09:15:47.782968 473 log.go:172] (0xc00010ca50) (0xc000a22000) Stream removed, broadcasting: 3\nI0309 09:15:47.782975 473 log.go:172] (0xc00010ca50) (0xc000a220a0) Stream removed, broadcasting: 5\n" Mar 9 09:15:47.786: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:15:47.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1363" for this suite. 
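The three nc probes above cover the service's DNS name, its cluster IP, and each node's IP on the allocated node port. As a minimal sketch, the same check can be reproduced by hand; the deployment, service, and exec-pod names here are illustrative, while the nc flags are the ones this suite uses:

    # expose an existing deployment on a NodePort (names are illustrative)
    kubectl expose deployment nodeport-demo --name=nodeport-test --type=NodePort --port=80
    # look up what the control plane allocated from the NodePort range
    NODE_PORT=$(kubectl get svc nodeport-test -o jsonpath='{.spec.ports[0].nodePort}')
    CLUSTER_IP=$(kubectl get svc nodeport-test -o jsonpath='{.spec.clusterIP}')
    # probe service name, cluster IP, and <node-ip>:<node-port> from an in-cluster exec pod
    kubectl exec execpod -- /bin/sh -x -c "nc -zv -t -w 2 nodeport-test 80"
    kubectl exec execpod -- /bin/sh -x -c "nc -zv -t -w 2 $CLUSTER_IP 80"
    kubectl exec execpod -- /bin/sh -x -c "nc -zv -t -w 2 <node-ip> $NODE_PORT"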
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.013 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":92,"skipped":1518,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:15:47.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 9 09:15:49.930: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:15:49.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1896" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:15:49.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 9 09:15:50.083: INFO: Waiting up to 5m0s for pod "var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d" in namespace "var-expansion-6898" to be "success or failure" Mar 9 09:15:50.087: INFO: Pod "var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233926ms Mar 9 09:15:52.091: INFO: Pod "var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008222992s STEP: Saw pod success Mar 9 09:15:52.091: INFO: Pod "var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d" satisfied condition "success or failure" Mar 9 09:15:52.094: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d container dapi-container: STEP: delete the pod Mar 9 09:15:52.139: INFO: Waiting for pod var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d to disappear Mar 9 09:15:52.147: INFO: Pod var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:15:52.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6898" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1551,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:15:52.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:15:52.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2" in namespace "projected-1246" to be "success or failure" Mar 9 09:15:52.238: INFO: Pod "downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447792ms Mar 9 09:15:54.250: INFO: Pod "downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013846333s STEP: Saw pod success Mar 9 09:15:54.250: INFO: Pod "downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2" satisfied condition "success or failure" Mar 9 09:15:54.253: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2 container client-container: STEP: delete the pod Mar 9 09:15:54.283: INFO: Waiting for pod downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2 to disappear Mar 9 09:15:54.291: INFO: Pod downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:15:54.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1246" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1582,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:15:54.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:15:54.388: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:15:55.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3414" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":96,"skipped":1591,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:15:55.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:15:55.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4815" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":97,"skipped":1604,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:15:55.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:16:55.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1091" for this suite. • [SLOW TEST:60.097 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1617,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:16:55.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:16:55.772: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 9 09:16:58.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6207 create -f -' Mar 9 09:17:00.538: INFO: stderr: "" Mar 9 09:17:00.538: INFO: stdout: "e2e-test-crd-publish-openapi-6406-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 9 09:17:00.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6207 delete e2e-test-crd-publish-openapi-6406-crds test-cr' Mar 9 09:17:00.641: 
INFO: stderr: "" Mar 9 09:17:00.642: INFO: stdout: "e2e-test-crd-publish-openapi-6406-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 9 09:17:00.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6207 apply -f -' Mar 9 09:17:00.909: INFO: stderr: "" Mar 9 09:17:00.909: INFO: stdout: "e2e-test-crd-publish-openapi-6406-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 9 09:17:00.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6207 delete e2e-test-crd-publish-openapi-6406-crds test-cr' Mar 9 09:17:00.994: INFO: stderr: "" Mar 9 09:17:00.994: INFO: stdout: "e2e-test-crd-publish-openapi-6406-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 9 09:17:00.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6406-crds' Mar 9 09:17:01.238: INFO: stderr: "" Mar 9 09:17:01.238: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6406-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:17:03.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6207" for this suite. • [SLOW TEST:8.302 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":99,"skipped":1660,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:17:04.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 9 09:17:04.077: INFO: Waiting up to 5m0s for pod "pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab" in namespace "emptydir-32" to be "success or failure" Mar 9 09:17:04.083: INFO: Pod "pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab": Phase="Pending", Reason="", readiness=false. Elapsed: 5.477627ms Mar 9 09:17:06.093: INFO: Pod "pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015913726s Mar 9 09:17:08.097: INFO: Pod "pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019895329s STEP: Saw pod success Mar 9 09:17:08.097: INFO: Pod "pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab" satisfied condition "success or failure" Mar 9 09:17:08.100: INFO: Trying to get logs from node jerma-worker2 pod pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab container test-container: STEP: delete the pod Mar 9 09:17:08.135: INFO: Waiting for pod pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab to disappear Mar 9 09:17:08.143: INFO: Pod pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:17:08.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-32" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:17:08.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7306 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-7306 Mar 9 09:17:08.210: INFO: Found 0 stateful pods, waiting for 1 Mar 9 09:17:18.214: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 9 09:17:18.238: INFO: Deleting all statefulset in ns statefulset-7306 Mar 9 09:17:18.244: INFO: Scaling statefulset ss to 0 Mar 9 09:17:38.322: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 09:17:38.325: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:17:38.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7306" for this suite. 
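The "getting/updating a scale subresource" steps above go through the /scale endpoint rather than editing the StatefulSet spec directly. A minimal sketch against the generated namespace from this run (substitute your own):

    # read the scale subresource for statefulset ss
    kubectl get --raw /apis/apps/v1/namespaces/statefulset-7306/statefulsets/ss/scale
    # write replicas through the same subresource
    kubectl scale statefulset ss --replicas=2 --namespace=statefulset-7306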
• [SLOW TEST:30.195 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":101,"skipped":1754,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:17:38.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4790 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 9 09:17:38.442: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 9 09:17:54.593: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.15:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4790 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:17:54.593: INFO: >>> kubeConfig: /root/.kube/config I0309 09:17:54.626512 6 log.go:172] (0xc001372420) (0xc000e8f220) Create stream I0309 09:17:54.626547 6 log.go:172] (0xc001372420) (0xc000e8f220) Stream added, broadcasting: 1 I0309 09:17:54.629069 6 log.go:172] (0xc001372420) Reply frame received for 1 I0309 09:17:54.629123 6 log.go:172] (0xc001372420) (0xc0014d6d20) Create stream I0309 09:17:54.629140 6 log.go:172] (0xc001372420) (0xc0014d6d20) Stream added, broadcasting: 3 I0309 09:17:54.630467 6 log.go:172] (0xc001372420) Reply frame received for 3 I0309 09:17:54.630517 6 log.go:172] (0xc001372420) (0xc001e9fc20) Create stream I0309 09:17:54.630532 6 log.go:172] (0xc001372420) (0xc001e9fc20) Stream added, broadcasting: 5 I0309 09:17:54.631703 6 log.go:172] (0xc001372420) Reply frame received for 5 I0309 09:17:54.704388 6 log.go:172] (0xc001372420) Data frame received for 3 I0309 09:17:54.704426 6 log.go:172] (0xc0014d6d20) (3) Data frame handling I0309 09:17:54.704438 6 log.go:172] (0xc0014d6d20) (3) Data frame sent I0309 09:17:54.704474 6 log.go:172] (0xc001372420) Data frame received for 5 I0309 09:17:54.704525 6 log.go:172] (0xc001e9fc20) (5) Data frame handling I0309 09:17:54.704570 6 log.go:172] (0xc001372420) Data frame received for 3 I0309 09:17:54.704596 6 log.go:172] (0xc0014d6d20) (3) Data frame handling I0309 09:17:54.706569 6 log.go:172] (0xc001372420) Data frame received 
for 1 I0309 09:17:54.706614 6 log.go:172] (0xc000e8f220) (1) Data frame handling I0309 09:17:54.706642 6 log.go:172] (0xc000e8f220) (1) Data frame sent I0309 09:17:54.706665 6 log.go:172] (0xc001372420) (0xc000e8f220) Stream removed, broadcasting: 1 I0309 09:17:54.706782 6 log.go:172] (0xc001372420) Go away received I0309 09:17:54.706858 6 log.go:172] (0xc001372420) (0xc000e8f220) Stream removed, broadcasting: 1 I0309 09:17:54.706893 6 log.go:172] (0xc001372420) (0xc0014d6d20) Stream removed, broadcasting: 3 I0309 09:17:54.706910 6 log.go:172] (0xc001372420) (0xc001e9fc20) Stream removed, broadcasting: 5 Mar 9 09:17:54.706: INFO: Found all expected endpoints: [netserver-0] Mar 9 09:17:54.710: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.25:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4790 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:17:54.710: INFO: >>> kubeConfig: /root/.kube/config I0309 09:17:54.743767 6 log.go:172] (0xc0016080b0) (0xc0014d7cc0) Create stream I0309 09:17:54.743794 6 log.go:172] (0xc0016080b0) (0xc0014d7cc0) Stream added, broadcasting: 1 I0309 09:17:54.745984 6 log.go:172] (0xc0016080b0) Reply frame received for 1 I0309 09:17:54.746031 6 log.go:172] (0xc0016080b0) (0xc0014863c0) Create stream I0309 09:17:54.746065 6 log.go:172] (0xc0016080b0) (0xc0014863c0) Stream added, broadcasting: 3 I0309 09:17:54.746944 6 log.go:172] (0xc0016080b0) Reply frame received for 3 I0309 09:17:54.746979 6 log.go:172] (0xc0016080b0) (0xc0014865a0) Create stream I0309 09:17:54.746990 6 log.go:172] (0xc0016080b0) (0xc0014865a0) Stream added, broadcasting: 5 I0309 09:17:54.747761 6 log.go:172] (0xc0016080b0) Reply frame received for 5 I0309 09:17:54.833719 6 log.go:172] (0xc0016080b0) Data frame received for 3 I0309 09:17:54.833750 6 log.go:172] (0xc0014863c0) (3) Data frame handling I0309 09:17:54.833770 6 log.go:172] (0xc0014863c0) (3) Data frame sent I0309 09:17:54.833781 6 log.go:172] (0xc0016080b0) Data frame received for 3 I0309 09:17:54.833791 6 log.go:172] (0xc0014863c0) (3) Data frame handling I0309 09:17:54.834268 6 log.go:172] (0xc0016080b0) Data frame received for 5 I0309 09:17:54.834307 6 log.go:172] (0xc0014865a0) (5) Data frame handling I0309 09:17:54.835713 6 log.go:172] (0xc0016080b0) Data frame received for 1 I0309 09:17:54.835744 6 log.go:172] (0xc0014d7cc0) (1) Data frame handling I0309 09:17:54.835769 6 log.go:172] (0xc0014d7cc0) (1) Data frame sent I0309 09:17:54.835788 6 log.go:172] (0xc0016080b0) (0xc0014d7cc0) Stream removed, broadcasting: 1 I0309 09:17:54.835806 6 log.go:172] (0xc0016080b0) Go away received I0309 09:17:54.835943 6 log.go:172] (0xc0016080b0) (0xc0014d7cc0) Stream removed, broadcasting: 1 I0309 09:17:54.835967 6 log.go:172] (0xc0016080b0) (0xc0014863c0) Stream removed, broadcasting: 3 I0309 09:17:54.835980 6 log.go:172] (0xc0016080b0) (0xc0014865a0) Stream removed, broadcasting: 5 Mar 9 09:17:54.835: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:17:54.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4790" for this suite. 
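Each "Found all expected endpoints" above is the result of curling a netserver pod's /hostName endpoint from the host-network test pod. Reproduced by hand with the pod IP from this run (the IP differs per run):

    kubectl exec host-test-container-pod --namespace=pod-network-test-4790 -- /bin/sh -c \
      "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.15:8080/hostName | grep -v '^\s*$'"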
• [SLOW TEST:16.498 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1773,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:17:54.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 9 09:17:54.939: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 9 09:17:59.943: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:18:00.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3499" for this suite. 
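"Released" here means the pod no longer matches the ReplicationController's selector, so the controller orphans it and creates a replacement. A minimal sketch of triggering that by hand (the generated pod-name suffix is a placeholder):

    # overwrite the label the RC selects on; the pod is released from the RC
    kubectl label pod pod-release-<suffix> name=released --overwrite
    # the RC now sees 0 matching replicas and starts a new pod
    kubectl get pods -l name=pod-release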
• [SLOW TEST:6.124 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":103,"skipped":1805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:18:00.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-ad013205-c473-45d8-8e2f-8d7bc2a78176 STEP: Creating a pod to test consume secrets Mar 9 09:18:01.082: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007" in namespace "projected-3249" to be "success or failure" Mar 9 09:18:01.085: INFO: Pod "pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.62895ms Mar 9 09:18:03.089: INFO: Pod "pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007298809s STEP: Saw pod success Mar 9 09:18:03.089: INFO: Pod "pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007" satisfied condition "success or failure" Mar 9 09:18:03.091: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007 container secret-volume-test: STEP: delete the pod Mar 9 09:18:03.111: INFO: Waiting for pod pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007 to disappear Mar 9 09:18:03.115: INFO: Pod pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:18:03.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3249" for this suite. 
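The "multiple volumes" case mounts the same projected secret at two paths in one pod. A minimal sketch with illustrative names:

    kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
        volumeMounts:
        - name: secret-1
          mountPath: /etc/secret-volume-1
        - name: secret-2
          mountPath: /etc/secret-volume-2
      volumes:
      - name: secret-1
        projected:
          sources:
          - secret:
              name: projected-secret-demo
      - name: secret-2
        projected:
          sources:
          - secret:
              name: projected-secret-demo
    EOF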
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1876,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:18:03.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-de23d10c-a8ea-468f-8b1c-f64c971cd5e7 STEP: Creating a pod to test consume configMaps Mar 9 09:18:03.314: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6" in namespace "projected-1812" to be "success or failure" Mar 9 09:18:03.325: INFO: Pod "pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.460577ms Mar 9 09:18:05.328: INFO: Pod "pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013995481s STEP: Saw pod success Mar 9 09:18:05.328: INFO: Pod "pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6" satisfied condition "success or failure" Mar 9 09:18:05.332: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6 container projected-configmap-volume-test: STEP: delete the pod Mar 9 09:18:05.356: INFO: Waiting for pod pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6 to disappear Mar 9 09:18:05.411: INFO: Pod pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:18:05.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1812" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1880,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:18:05.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:18:06.084: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:18:09.144: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:18:09.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2036" for this suite. STEP: Destroying namespace "webhook-2036-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":106,"skipped":1883,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:18:09.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:18:09.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-935" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":107,"skipped":1902,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:18:09.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 9 09:18:09.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5068' Mar 9 09:18:09.892: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 9 09:18:09.892: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793 Mar 9 09:18:09.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5068' Mar 9 09:18:10.021: INFO: stderr: "" Mar 9 09:18:10.021: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:18:10.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5068" for this suite. 
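kubectl run --generator=job/v1 was already deprecated when this suite ran (note the stderr warning above). The non-deprecated equivalent of the same command:

    # what the test ran (deprecated generator form)
    kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 \
      --image=docker.io/library/httpd:2.4.38-alpine
    # the replacement the deprecation warning points to
    kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine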
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":108,"skipped":1909,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:18:10.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 9 09:18:10.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5846' Mar 9 09:18:10.292: INFO: stderr: "" Mar 9 09:18:10.292: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 9 09:18:10.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846' Mar 9 09:18:10.416: INFO: stderr: "" Mar 9 09:18:10.416: INFO: stdout: "update-demo-nautilus-jnm45 update-demo-nautilus-sfcnt " Mar 9 09:18:10.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jnm45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:10.509: INFO: stderr: "" Mar 9 09:18:10.509: INFO: stdout: "" Mar 9 09:18:10.509: INFO: update-demo-nautilus-jnm45 is created but not running Mar 9 09:18:15.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846' Mar 9 09:18:15.619: INFO: stderr: "" Mar 9 09:18:15.619: INFO: stdout: "update-demo-nautilus-jnm45 update-demo-nautilus-sfcnt " Mar 9 09:18:15.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jnm45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:15.720: INFO: stderr: "" Mar 9 09:18:15.720: INFO: stdout: "true" Mar 9 09:18:15.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jnm45 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:15.816: INFO: stderr: "" Mar 9 09:18:15.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 09:18:15.816: INFO: validating pod update-demo-nautilus-jnm45 Mar 9 09:18:15.820: INFO: got data: { "image": "nautilus.jpg" } Mar 9 09:18:15.820: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 09:18:15.820: INFO: update-demo-nautilus-jnm45 is verified up and running Mar 9 09:18:15.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:15.890: INFO: stderr: "" Mar 9 09:18:15.890: INFO: stdout: "true" Mar 9 09:18:15.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:15.981: INFO: stderr: "" Mar 9 09:18:15.981: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 09:18:15.982: INFO: validating pod update-demo-nautilus-sfcnt Mar 9 09:18:15.985: INFO: got data: { "image": "nautilus.jpg" } Mar 9 09:18:15.985: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 09:18:15.985: INFO: update-demo-nautilus-sfcnt is verified up and running STEP: scaling down the replication controller Mar 9 09:18:15.988: INFO: scanned /root for discovery docs: Mar 9 09:18:15.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5846' Mar 9 09:18:17.137: INFO: stderr: "" Mar 9 09:18:17.137: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 9 09:18:17.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846' Mar 9 09:18:17.253: INFO: stderr: "" Mar 9 09:18:17.253: INFO: stdout: "update-demo-nautilus-jnm45 update-demo-nautilus-sfcnt " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 9 09:18:22.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846' Mar 9 09:18:22.378: INFO: stderr: "" Mar 9 09:18:22.379: INFO: stdout: "update-demo-nautilus-jnm45 update-demo-nautilus-sfcnt " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 9 09:18:27.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846' Mar 9 09:18:27.458: INFO: stderr: "" Mar 9 09:18:27.458: INFO: stdout: "update-demo-nautilus-sfcnt " Mar 9 09:18:27.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:27.529: INFO: stderr: "" Mar 9 09:18:27.529: INFO: stdout: "true" Mar 9 09:18:27.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:27.602: INFO: stderr: "" Mar 9 09:18:27.602: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 09:18:27.602: INFO: validating pod update-demo-nautilus-sfcnt Mar 9 09:18:27.612: INFO: got data: { "image": "nautilus.jpg" } Mar 9 09:18:27.612: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 09:18:27.612: INFO: update-demo-nautilus-sfcnt is verified up and running STEP: scaling up the replication controller Mar 9 09:18:27.614: INFO: scanned /root for discovery docs: Mar 9 09:18:27.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5846' Mar 9 09:18:28.750: INFO: stderr: "" Mar 9 09:18:28.750: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 9 09:18:28.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846' Mar 9 09:18:28.853: INFO: stderr: "" Mar 9 09:18:28.853: INFO: stdout: "update-demo-nautilus-sfcnt update-demo-nautilus-sp5xk " Mar 9 09:18:28.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:28.957: INFO: stderr: "" Mar 9 09:18:28.957: INFO: stdout: "true" Mar 9 09:18:28.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:29.034: INFO: stderr: "" Mar 9 09:18:29.034: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 09:18:29.034: INFO: validating pod update-demo-nautilus-sfcnt Mar 9 09:18:29.036: INFO: got data: { "image": "nautilus.jpg" } Mar 9 09:18:29.036: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 09:18:29.036: INFO: update-demo-nautilus-sfcnt is verified up and running Mar 9 09:18:29.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp5xk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:29.105: INFO: stderr: "" Mar 9 09:18:29.105: INFO: stdout: "" Mar 9 09:18:29.105: INFO: update-demo-nautilus-sp5xk is created but not running Mar 9 09:18:34.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846' Mar 9 09:18:34.226: INFO: stderr: "" Mar 9 09:18:34.226: INFO: stdout: "update-demo-nautilus-sfcnt update-demo-nautilus-sp5xk " Mar 9 09:18:34.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:34.327: INFO: stderr: "" Mar 9 09:18:34.327: INFO: stdout: "true" Mar 9 09:18:34.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:34.394: INFO: stderr: "" Mar 9 09:18:34.394: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 09:18:34.394: INFO: validating pod update-demo-nautilus-sfcnt Mar 9 09:18:34.397: INFO: got data: { "image": "nautilus.jpg" } Mar 9 09:18:34.397: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 09:18:34.397: INFO: update-demo-nautilus-sfcnt is verified up and running Mar 9 09:18:34.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp5xk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:34.481: INFO: stderr: "" Mar 9 09:18:34.481: INFO: stdout: "true" Mar 9 09:18:34.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp5xk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846' Mar 9 09:18:34.546: INFO: stderr: "" Mar 9 09:18:34.546: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 09:18:34.546: INFO: validating pod update-demo-nautilus-sp5xk Mar 9 09:18:34.549: INFO: got data: { "image": "nautilus.jpg" } Mar 9 09:18:34.549: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 09:18:34.549: INFO: update-demo-nautilus-sp5xk is verified up and running STEP: using delete to clean up resources Mar 9 09:18:34.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5846' Mar 9 09:18:34.646: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 9 09:18:34.646: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 9 09:18:34.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5846' Mar 9 09:18:34.715: INFO: stderr: "No resources found in kubectl-5846 namespace.\n" Mar 9 09:18:34.715: INFO: stdout: "" Mar 9 09:18:34.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5846 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 9 09:18:34.801: INFO: stderr: "" Mar 9 09:18:34.801: INFO: stdout: "update-demo-nautilus-sfcnt\nupdate-demo-nautilus-sp5xk\n" Mar 9 09:18:35.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5846' Mar 9 09:18:35.411: INFO: stderr: "No resources found in kubectl-5846 namespace.\n" Mar 9 09:18:35.411: INFO: stdout: "" Mar 9 09:18:35.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5846 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 9 09:18:35.488: INFO: stderr: "" Mar 9 09:18:35.488: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:18:35.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5846" for this suite. • [SLOW TEST:25.467 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":109,"skipped":1924,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:18:35.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464 STEP: creating a pod Mar 9 09:18:35.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-317 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 9 09:18:35.678: INFO: stderr: "" Mar 9 09:18:35.678: INFO: stdout:
"pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 9 09:18:35.678: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 9 09:18:35.678: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-317" to be "running and ready, or succeeded" Mar 9 09:18:35.691: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.629228ms Mar 9 09:18:37.694: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.016158192s Mar 9 09:18:37.694: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 9 09:18:37.694: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Mar 9 09:18:37.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317' Mar 9 09:18:37.826: INFO: stderr: "" Mar 9 09:18:37.826: INFO: stdout: "I0309 09:18:36.908412 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/9gh 556\nI0309 09:18:37.108671 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wwxs 590\nI0309 09:18:37.308580 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/tdm 534\nI0309 09:18:37.508685 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/vrc 361\nI0309 09:18:37.708583 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/8tgb 345\n" STEP: limiting log lines Mar 9 09:18:37.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317 --tail=1' Mar 9 09:18:37.935: INFO: stderr: "" Mar 9 09:18:37.936: INFO: stdout: "I0309 09:18:37.908549 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/wq4m 503\n" Mar 9 09:18:37.936: INFO: got output "I0309 09:18:37.908549 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/wq4m 503\n" STEP: limiting log bytes Mar 9 09:18:37.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317 --limit-bytes=1' Mar 9 09:18:38.014: INFO: stderr: "" Mar 9 09:18:38.014: INFO: stdout: "I" Mar 9 09:18:38.014: INFO: got output "I" STEP: exposing timestamps Mar 9 09:18:38.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317 --tail=1 --timestamps' Mar 9 09:18:38.106: INFO: stderr: "" Mar 9 09:18:38.106: INFO: stdout: "2020-03-09T09:18:37.908692024Z I0309 09:18:37.908549 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/wq4m 503\n" Mar 9 09:18:38.106: INFO: got output "2020-03-09T09:18:37.908692024Z I0309 09:18:37.908549 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/wq4m 503\n" STEP: restricting to a time range Mar 9 09:18:40.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317 --since=1s' Mar 9 09:18:40.774: INFO: stderr: "" Mar 9 09:18:40.774: INFO: stdout: "I0309 09:18:39.908538 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/lz9k 549\nI0309 09:18:40.108630 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/z62j 506\nI0309 09:18:40.308594 1 logs_generator.go:76] 17 GET 
/api/v1/namespaces/kube-system/pods/g4bj 284\nI0309 09:18:40.508574 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/jtt 431\nI0309 09:18:40.708588 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/qnp5 357\n" Mar 9 09:18:40.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317 --since=24h' Mar 9 09:18:40.858: INFO: stderr: "" Mar 9 09:18:40.858: INFO: stdout: "I0309 09:18:36.908412 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/9gh 556\nI0309 09:18:37.108671 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wwxs 590\nI0309 09:18:37.308580 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/tdm 534\nI0309 09:18:37.508685 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/vrc 361\nI0309 09:18:37.708583 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/8tgb 345\nI0309 09:18:37.908549 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/wq4m 503\nI0309 09:18:38.108540 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/mcw9 533\nI0309 09:18:38.308593 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/r4l 506\nI0309 09:18:38.508612 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/9gf 519\nI0309 09:18:38.708676 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/gfvl 295\nI0309 09:18:38.908602 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/95zp 426\nI0309 09:18:39.108639 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/glxh 469\nI0309 09:18:39.308649 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/tqv 500\nI0309 09:18:39.508598 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/r7q 219\nI0309 09:18:39.708659 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/clj 250\nI0309 09:18:39.908538 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/lz9k 549\nI0309 09:18:40.108630 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/z62j 506\nI0309 09:18:40.308594 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/g4bj 284\nI0309 09:18:40.508574 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/jtt 431\nI0309 09:18:40.708588 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/qnp5 357\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470 Mar 9 09:18:40.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-317' Mar 9 09:18:46.055: INFO: stderr: "" Mar 9 09:18:46.055: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:18:46.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-317" for this suite. 
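For anyone replaying the Update Demo case above by hand, the scaling and cleanup steps reduce to a handful of kubectl invocations. A minimal sketch using the names from this run (the kubectl-5846 namespace and the update-demo-nautilus controller are transient, so these only resolve while such a run is in flight):

$ kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5846
$ kubectl get pods -l name=update-demo --namespace=kubectl-5846 -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
$ kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=kubectl-5846

The --grace-period=0 --force pair is what produces the "Immediate deletion does not wait for confirmation" warning captured in the log: the controller object is removed immediately while its pods may linger briefly, which is why the test keeps polling until the go-template over .items comes back empty.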
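The log-retrieval case likewise maps one-to-one onto plain kubectl flags; a minimal sketch against the logs-generator pod created above (the 'kubectl logs POD CONTAINER' form, since the pod and its container share the name):

$ kubectl logs logs-generator logs-generator --namespace=kubectl-317                # full log
$ kubectl logs logs-generator logs-generator --namespace=kubectl-317 --tail=1      # newest line only
$ kubectl logs logs-generator logs-generator --namespace=kubectl-317 --limit-bytes=1
$ kubectl logs logs-generator logs-generator --namespace=kubectl-317 --tail=1 --timestamps
$ kubectl logs logs-generator logs-generator --namespace=kubectl-317 --since=1s    # last second only
$ kubectl logs logs-generator logs-generator --namespace=kubectl-317 --since=24h

--timestamps prefixes each line with the RFC3339 write time (visible in the output above), and the --tail, --limit-bytes, and --since filters can be combined, as the test does with --tail=1 --timestamps.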
• [SLOW TEST:10.574 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":110,"skipped":1940,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:18:46.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 9 09:18:46.171: INFO: Waiting up to 5m0s for pod "pod-195071b0-3241-468a-b265-94478ce669a9" in namespace "emptydir-8176" to be "success or failure" Mar 9 09:18:46.208: INFO: Pod "pod-195071b0-3241-468a-b265-94478ce669a9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.510659ms Mar 9 09:18:48.212: INFO: Pod "pod-195071b0-3241-468a-b265-94478ce669a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040869976s STEP: Saw pod success Mar 9 09:18:48.212: INFO: Pod "pod-195071b0-3241-468a-b265-94478ce669a9" satisfied condition "success or failure" Mar 9 09:18:48.215: INFO: Trying to get logs from node jerma-worker2 pod pod-195071b0-3241-468a-b265-94478ce669a9 container test-container: STEP: delete the pod Mar 9 09:18:48.267: INFO: Waiting for pod pod-195071b0-3241-468a-b265-94478ce669a9 to disappear Mar 9 09:18:48.277: INFO: Pod pod-195071b0-3241-468a-b265-94478ce669a9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:18:48.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8176" for this suite. 
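For context on what the (root,0666,tmpfs) emptyDir case above exercises: the test name encodes the user, file mode, and volume medium being checked, and medium: Memory is the field that makes an emptyDir tmpfs-backed. A rough standalone equivalent, assuming a generic busybox image and hypothetical names rather than the test's own image and generated names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name, not from this run
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31            # assumed stand-in for the conformance test image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && grep /test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # Memory medium backs the emptyDir with tmpfs, hence [LinuxOnly]
EOF

A pod like this writes a mode-0666 file into the volume, prints its permissions and the tmpfs mount entry, and then exits, which lines up with the "success or failure" condition the framework polls for in the log (Phase="Succeeded").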
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1940,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:18:48.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:18:48.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 9 09:18:48.966: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:48Z generation:1 name:name1 resourceVersion:266946 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:40ac3a32-8547-47d6-b4ca-9aca855af20f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 9 09:18:58.972: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:58Z generation:1 name:name2 resourceVersion:266995 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cc9c7227-3434-45f4-b2ea-64a05c41e23e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 9 09:19:08.979: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:48Z generation:2 name:name1 resourceVersion:267025 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:40ac3a32-8547-47d6-b4ca-9aca855af20f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 9 09:19:18.986: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:58Z generation:2 name:name2 resourceVersion:267055 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cc9c7227-3434-45f4-b2ea-64a05c41e23e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 9 09:19:28.993: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:48Z generation:2 name:name1 resourceVersion:267085 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:40ac3a32-8547-47d6-b4ca-9aca855af20f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 9 09:19:39.001: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:58Z generation:2 name:name2 resourceVersion:267115 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cc9c7227-3434-45f4-b2ea-64a05c41e23e] 
num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:19:49.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2187" for this suite. • [SLOW TEST:61.233 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":112,"skipped":1955,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:19:49.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:19:49.598: INFO: Creating deployment "webserver-deployment" Mar 9 09:19:49.619: INFO: Waiting for observed generation 1 Mar 9 09:19:51.676: INFO: Waiting for all required pods to come up Mar 9 09:19:51.694: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 9 09:19:55.704: INFO: Waiting for deployment "webserver-deployment" to complete Mar 9 09:19:55.711: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 9 09:19:55.719: INFO: Updating deployment webserver-deployment Mar 9 09:19:55.719: INFO: Waiting for observed generation 2 Mar 9 09:19:57.790: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 9 09:19:57.792: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 9 09:19:57.795: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 9 09:19:57.801: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 9 09:19:57.801: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 9 09:19:57.803: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 9 09:19:57.807: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 9 09:19:57.807: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 9 09:19:57.820: INFO: 
Updating deployment webserver-deployment Mar 9 09:19:57.820: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 9 09:19:57.843: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 9 09:19:57.903: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 9 09:19:58.004: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6460 /apis/apps/v1/namespaces/deployment-6460/deployments/webserver-deployment 7aaa05b7-bc27-458a-be56-84c8bfb3efea 267375 3 2020-03-09 09:19:49 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f0cc28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-09 09:19:56 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-09 09:19:57 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 9 09:19:58.082: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6460 /apis/apps/v1/namespaces/deployment-6460/replicasets/webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 267426 3 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 7aaa05b7-bc27-458a-be56-84c8bfb3efea 0xc000a175f7 0xc000a175f8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a176f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 9 09:19:58.082: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 9 09:19:58.082: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6460 /apis/apps/v1/namespaces/deployment-6460/replicasets/webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 267425 3 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 7aaa05b7-bc27-458a-be56-84c8bfb3efea 0xc000a174e7 0xc000a174e8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a17598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 9 09:19:58.103: INFO: Pod "webserver-deployment-595b5b9587-2cb6d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2cb6d webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-2cb6d 72d043d3-e1e5-40b8-9f58-42ee83d10048 267404 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289c047 0xc00289c048}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-2v94v" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2v94v webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-2v94v 76153c66-e3a5-47ec-9e9d-711a9ea5d038 267262 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289c200 0xc00289c201}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.32,StartTime:2020-03-09 09:19:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://912c6e030ebbc3834764f237408509f8a43375215461be8f5f8f2ab7ae8a9a79,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-4flc7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4flc7 webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-4flc7 d6fe87d5-0b8a-41c8-b7f2-760b7c006eaf 267274 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289c4a0 0xc00289c4a1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effe
ct:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.23,StartTime:2020-03-09 09:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://98c3e3fb99f30a3e6ab290092709d2055bf22e5020b66aca34954d8baac4265e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-59x9c" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-59x9c webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-59x9c cddb708a-55db-4dba-bc84-97c477c1c6ca 267265 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289c740 0xc00289c741}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.36,StartTime:2020-03-09 09:19:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c0812f58d022f61c1d912530640bd1270addc5c549bdecd299d39aa26f8f35bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-5jvmr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5jvmr webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-5jvmr b04bc41d-98e6-4d96-9d47-ac0b26b8008a 267392 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289c940 0xc00289c941}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-6kmrc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6kmrc webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-6kmrc d8984441-e119-4816-b622-911584b745cd 267259 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289cb90 0xc00289cb91}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Tolera
tionSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.33,StartTime:2020-03-09 09:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bc2f62d7a9f8f6c729947c3c49b6fa0cd9183bd6850fd9fbe92fbb799fc99ff7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-8gfcx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8gfcx webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-8gfcx 4f8b6336-4f16-47d6-a060-6cd733699b46 267417 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289cd70 0xc00289cd71}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-cz7vm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cz7vm webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-cz7vm 5bb3ae2b-9ca5-4040-aa80-c6de5bd57a50 267420 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289ce80 0xc00289ce81}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-d4blq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d4blq webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-d4blq 3dd14a47-cc73-4ce5-9c50-cd1677da3106 267382 0 
2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289cf90 0xc00289cf91}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-f4twt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f4twt webserver-deployment-595b5b9587- deployment-6460 
/api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-f4twt 00de9839-fdc5-49aa-8c24-c7a777b2b830 267419 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d0a0 0xc00289d0a1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-krrxk" is available: 
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-krrxk webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-krrxk bbcbb83c-e49c-4453-be1a-c251279360ff 267268 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d1b0 0xc00289d1b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.35,StartTime:2020-03-09 09:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://265f9816ae2f2e1c22913a1a5b7b59e9903c8ce2c597bd394fda022cf8dc4d09,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-l96m7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l96m7 webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-l96m7 25ed64f8-122f-4f21-b5a0-d825ef7a60e8 267248 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d320 0xc00289d321}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,S
ubdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.22,StartTime:2020-03-09 09:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1224d121f21339681d0563132496845fd8feb81ca5b4cd6502e70ad7edc7957e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-qxd6w" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qxd6w webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-qxd6w 2ebebd8b-e80b-4eea-97bc-b7e073262b2c 267416 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d5c0 0xc00289d5c1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-rfzjv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rfzjv webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-rfzjv 4fc3c061-742b-4469-a878-44881f476636 267405 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d7b0 0xc00289d7b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-rmld8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rmld8 webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-rmld8 5e6aac17-5ab2-461a-9149-9e4afac8c1e9 267406 0 2020-03-09 
09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d960 0xc00289d961}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-s4f9s" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s4f9s webserver-deployment-595b5b9587- deployment-6460 
/api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-s4f9s d479e7e8-9bdb-44a5-9756-ea46b9fe4494 267418 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289db40 0xc00289db41}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-sr667" is not available: 
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sr667 webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-sr667 7109f57c-d197-4791-afad-fd9eafc912ad 267440 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289dcf0 0xc00289dcf1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-09 09:19:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-w4rcm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w4rcm webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-w4rcm 23f1f8bd-e0e0-4743-bd12-83014cb6c2bd 267277 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289df90 0xc00289df91}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitCon
tainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.26,StartTime:2020-03-09 09:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6a474733f2dd7cfce693522ea294b2bfa4535109d74bfe35d46ec5ebbf337b67,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-x69n7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x69n7 webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-x69n7 1cc99b4a-7a6a-497c-820b-b998457268a5 267280 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc000a1a190 0xc000a1a191}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.25,StartTime:2020-03-09 09:19:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://06f488455f731d7406d0c2b78554a34bf198e05a2917a3105159ab32c548ae9a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-zxz9f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zxz9f webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-zxz9f fa48b456-f87e-4019-b9a5-058b93a91e65 267430 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc000a1a310 0xc000a1a311}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-09 09:19:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-46jxk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-46jxk webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-46jxk 7c3eb508-1845-4d88-a118-952d91c0f6df 267431 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1a460 0xc000a1a461}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-4psfk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4psfk webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-4psfk 8cd54c9b-8f64-40c9-b650-02df4768cb7b 267423 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1a580 0xc000a1a581}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-5fgfq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5fgfq webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-5fgfq a8f6ed64-cfb9-4049-b335-1d64b2c2ba39 267422 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1a6b0 0xc000a1a6b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-7f2rv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7f2rv webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-7f2rv 
ff8afdd6-2861-4cdf-ac12-858edf350d7c 267421 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1a7d0 0xc000a1a7d1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-9tgqc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9tgqc webserver-deployment-c7997dcc8- deployment-6460 
/api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-9tgqc a1d22c9d-6a81-44b8-9bd5-72ca4c95187e 267341 0 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1ab20 0xc000a1ab21}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-09 09:19:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-b7p6q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b7p6q webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-b7p6q e5d39845-3e38-49f5-b39a-5050388c70c3 267344 0 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1acb0 0xc000a1acb1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolera
tion{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-09 09:19:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-hxpwk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hxpwk webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-hxpwk 1b8ff28c-3599-41dc-b144-53503e5337b7 267380 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1ae40 0xc000a1ae41}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-j6w69" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j6w69 webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-j6w69 5c607f29-9274-4520-bd2d-7f334e391826 267403 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1af70 0xc000a1af71}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-sgfzp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sgfzp webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-sgfzp dd1cbde6-52b8-4310-9641-928dfc6e6556 267424 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1b0c0 0xc000a1b0c1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.108: INFO: Pod "webserver-deployment-c7997dcc8-tv2kh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tv2kh webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-tv2kh 
ccb3dafb-0b37-4f16-bcb6-b817b9292dac 267384 0 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1b1e0 0xc000a1b1e1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.27,StartTime:2020-03-09 09:19:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.108: INFO: Pod "webserver-deployment-c7997dcc8-xvssz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xvssz webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-xvssz 2e56321b-24b4-4831-8f69-5c11813512f6 267342 0 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1b380 0xc000a1b381}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-09 09:19:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.108: INFO: Pod "webserver-deployment-c7997dcc8-xx6kg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xx6kg webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-xx6kg b008796e-672d-4276-95c4-8dee5da70ac5 267390 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1b4f0 0xc000a1b4f1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 9 09:19:58.108: INFO: Pod "webserver-deployment-c7997dcc8-zj2z8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zj2z8 webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-zj2z8 e9936b39-87f6-4002-ae6b-bb0d9bf06276 267346 0 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1b630 0xc000a1b631}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-09 09:19:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:19:58.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6460" for this suite. • [SLOW TEST:8.737 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":113,"skipped":1970,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:19:58.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 9 09:19:58.476: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 9 09:19:58.593: INFO: Waiting for terminating namespaces to be deleted... 
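Before evaluating any predicate, the framework inventories every pod already bound to each schedulable node so it can sum their CPU requests against node allocatable. A minimal client-go sketch of that per-node listing (current client-go List signatures, which differ slightly from the v1.17-era client used in this run; the function name and output format are assumptions, not the framework's exact code):

    package inventory

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // ListPodsOnNode mirrors the inventory logged below: every pod, in any
    // namespace, that the scheduler has bound to the given node.
    func ListPodsOnNode(ctx context.Context, c kubernetes.Interface, node string) error {
    	pods, err := c.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
    		FieldSelector: "spec.nodeName=" + node,
    	})
    	if err != nil {
    		return err
    	}
    	for _, p := range pods.Items {
    		for _, ctr := range p.Spec.Containers {
    			// Unset requests come back as the zero Quantity, which
    			// the framework reports as cpu=0m.
    			cpu := ctr.Resources.Requests[corev1.ResourceCPU]
    			fmt.Printf("%s/%s container %s: cpu=%s\n", p.Namespace, p.Name, ctr.Name, cpu.String())
    		}
    	}
    	return nil
    }

Note that the webserver-deployment pods from the just-destroyed deployment-6460 namespace still show up in this inventory (termination is in progress), but since they request cpu=0m they do not affect the capacity arithmetic that follows.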
Mar 9 09:19:58.599: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-5jvmr from deployment-6460 started at 2020-03-09 09:19:57 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-f4twt from deployment-6460 started at (0 container statuses recorded) Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-4flc7 from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: true, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-xvssz from deployment-6460 started at 2020-03-09 09:19:55 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-hxpwk from deployment-6460 started at 2020-03-09 09:19:57 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-rfzjv from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-qxd6w from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-5fgfq from deployment-6460 started at (0 container statuses recorded) Mar 9 09:19:58.850: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-w4rcm from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: true, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-x69n7 from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: true, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-tv2kh from deployment-6460 started at 2020-03-09 09:19:55 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-j6w69 from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:58.850: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-l96m7 from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: true, restart count 0 Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-zxz9f from deployment-6460 started at 2020-03-09 09:19:57 +0000 UTC (1 container statuses recorded) Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:58.850: INFO: 
webserver-deployment-595b5b9587-s4f9s from deployment-6460 started at (0 container statuses recorded) Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-sgfzp from deployment-6460 started at (0 container statuses recorded) Mar 9 09:19:58.850: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-cz7vm from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:59.148: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-2v94v from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: true, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-xx6kg from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-59x9c from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: true, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-b7p6q from deployment-6460 started at 2020-03-09 09:19:55 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-2cb6d from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-8gfcx from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-7f2rv from deployment-6460 started at (0 container statuses recorded) Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-9tgqc from deployment-6460 started at 2020-03-09 09:19:55 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-d4blq from deployment-6460 started at 2020-03-09 09:19:57 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-46jxk from deployment-6460 started at (0 container statuses recorded) Mar 9 09:19:59.148: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-6kmrc from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: true, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-rmld8 from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:59.148: INFO: 
webserver-deployment-c7997dcc8-4psfk from deployment-6460 started at (0 container statuses recorded) Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-krrxk from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: true, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-zj2z8 from deployment-6460 started at 2020-03-09 09:19:56 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0 Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-sr667 from deployment-6460 started at 2020-03-09 09:19:57 +0000 UTC (1 container statuses recorded) Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-2cb6d requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-2v94v requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-4flc7 requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-59x9c requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-5jvmr requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-6kmrc requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-8gfcx requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-cz7vm requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-d4blq requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-f4twt requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-krrxk requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-l96m7 requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-qxd6w requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-rfzjv requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-rmld8 requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-s4f9s requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-sr667 requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-w4rcm requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-x69n7 requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-zxz9f requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.293: INFO: Pod webserver-deployment-c7997dcc8-46jxk requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.294: INFO: Pod 
webserver-deployment-c7997dcc8-4psfk requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-5fgfq requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-7f2rv requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-9tgqc requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-b7p6q requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-hxpwk requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-j6w69 requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-sgfzp requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-tv2kh requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-xvssz requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-xx6kg requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-zj2z8 requesting resource cpu=0m on Node jerma-worker2 Mar 9 09:19:59.294: INFO: Pod kindnet-gxwrl requesting resource cpu=100m on Node jerma-worker Mar 9 09:19:59.294: INFO: Pod kindnet-x9bds requesting resource cpu=100m on Node jerma-worker2 Mar 9 09:19:59.294: INFO: Pod kube-proxy-dvgp7 requesting resource cpu=0m on Node jerma-worker Mar 9 09:19:59.294: INFO: Pod kube-proxy-xqsww requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 9 09:19:59.294: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 9 09:19:59.298: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-20f200ef-5ef9-4028-b994-19989823391a.15fa983b47df5725], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4876/filler-pod-20f200ef-5ef9-4028-b994-19989823391a to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-20f200ef-5ef9-4028-b994-19989823391a.15fa983bde012a4e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-20f200ef-5ef9-4028-b994-19989823391a.15fa983bfbf1bc1f], Reason = [Created], Message = [Created container filler-pod-20f200ef-5ef9-4028-b994-19989823391a] STEP: Considering event: Type = [Normal], Name = [filler-pod-20f200ef-5ef9-4028-b994-19989823391a.15fa983c0a00eee5], Reason = [Started], Message = [Started container filler-pod-20f200ef-5ef9-4028-b994-19989823391a] STEP: Considering event: Type = [Normal], Name = [filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791.15fa983b458fbac3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4876/filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791.15fa983be8f5ef53], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791.15fa983c03009543], Reason = [Created], Message = [Created container filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791] STEP: Considering event: Type = [Normal], Name = [filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791.15fa983c10ce6b12], Reason = [Started], Message = [Started container filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fa983cb1a5f51a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fa983cb66d6660], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:06.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4876" for this suite. 
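The test above sums each node's existing CPU requests, fills the remaining allocatable CPU with filler pods, and then confirms one more pod is rejected with "Insufficient cpu". The behavior hinges entirely on resource requests; a minimal sketch of such a pod (hypothetical names, not the test's actual manifest):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-demo        # hypothetical
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "500m"             # scheduler only places the pod on a node with >= 500m unreserved allocatable CPU
EOF
# If no node can satisfy the request, the pod stays Pending and emits
# FailedScheduling events like the "Insufficient cpu" ones considered above.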
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.454 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":114,"skipped":1974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:06.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:06.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7677" for this suite. 
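The discovery documents this test walks can be fetched by hand; kubectl get --raw sends a raw GET through the apiserver (a sketch; jq is assumed to be installed, purely for readability):

kubectl get --raw /apis | jq '.groups[] | select(.name=="apiextensions.k8s.io")'
kubectl get --raw /apis/apiextensions.k8s.io                              # group document, lists v1
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[].name'  # should include "customresourcedefinitions"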
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":115,"skipped":2008,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:06.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:20:07.062: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1" in namespace "security-context-test-3349" to be "success or failure" Mar 9 09:20:07.147: INFO: Pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1": Phase="Pending", Reason="", readiness=false. Elapsed: 84.277682ms Mar 9 09:20:09.161: INFO: Pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098540944s Mar 9 09:20:11.170: INFO: Pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10795929s Mar 9 09:20:11.170: INFO: Pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1" satisfied condition "success or failure" Mar 9 09:20:11.177: INFO: Got logs for pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:11.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3349" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":2011,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:11.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9753 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9753 STEP: creating replication controller externalsvc in namespace services-9753 I0309 09:20:11.409673 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9753, replica count: 2 I0309 09:20:14.460061 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 9 09:20:14.509: INFO: Creating new exec pod Mar 9 09:20:16.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9753 execpodxb64w -- /bin/sh -x -c nslookup clusterip-service' Mar 9 09:20:16.712: INFO: stderr: "I0309 09:20:16.632058 1402 log.go:172] (0xc000ab5c30) (0xc00095caa0) Create stream\nI0309 09:20:16.632091 1402 log.go:172] (0xc000ab5c30) (0xc00095caa0) Stream added, broadcasting: 1\nI0309 09:20:16.636589 1402 log.go:172] (0xc000ab5c30) Reply frame received for 1\nI0309 09:20:16.636642 1402 log.go:172] (0xc000ab5c30) (0xc0006ba780) Create stream\nI0309 09:20:16.636660 1402 log.go:172] (0xc000ab5c30) (0xc0006ba780) Stream added, broadcasting: 3\nI0309 09:20:16.637454 1402 log.go:172] (0xc000ab5c30) Reply frame received for 3\nI0309 09:20:16.637479 1402 log.go:172] (0xc000ab5c30) (0xc000529540) Create stream\nI0309 09:20:16.637490 1402 log.go:172] (0xc000ab5c30) (0xc000529540) Stream added, broadcasting: 5\nI0309 09:20:16.638261 1402 log.go:172] (0xc000ab5c30) Reply frame received for 5\nI0309 09:20:16.700576 1402 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0309 09:20:16.700599 1402 log.go:172] (0xc000529540) (5) Data frame handling\nI0309 09:20:16.700612 1402 log.go:172] (0xc000529540) (5) Data frame sent\n+ nslookup clusterip-service\nI0309 09:20:16.705186 1402 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0309 09:20:16.705201 1402 log.go:172] (0xc0006ba780) (3) Data frame handling\nI0309 09:20:16.705213 1402 log.go:172] (0xc0006ba780) (3) Data frame sent\nI0309 09:20:16.706456 1402 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0309 09:20:16.706483 1402 log.go:172] (0xc0006ba780) 
(3) Data frame handling\nI0309 09:20:16.706498 1402 log.go:172] (0xc0006ba780) (3) Data frame sent\nI0309 09:20:16.706570 1402 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0309 09:20:16.706596 1402 log.go:172] (0xc000529540) (5) Data frame handling\nI0309 09:20:16.706712 1402 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0309 09:20:16.706730 1402 log.go:172] (0xc0006ba780) (3) Data frame handling\nI0309 09:20:16.708714 1402 log.go:172] (0xc000ab5c30) Data frame received for 1\nI0309 09:20:16.708737 1402 log.go:172] (0xc00095caa0) (1) Data frame handling\nI0309 09:20:16.708745 1402 log.go:172] (0xc00095caa0) (1) Data frame sent\nI0309 09:20:16.708755 1402 log.go:172] (0xc000ab5c30) (0xc00095caa0) Stream removed, broadcasting: 1\nI0309 09:20:16.708772 1402 log.go:172] (0xc000ab5c30) Go away received\nI0309 09:20:16.709010 1402 log.go:172] (0xc000ab5c30) (0xc00095caa0) Stream removed, broadcasting: 1\nI0309 09:20:16.709029 1402 log.go:172] (0xc000ab5c30) (0xc0006ba780) Stream removed, broadcasting: 3\nI0309 09:20:16.709038 1402 log.go:172] (0xc000ab5c30) (0xc000529540) Stream removed, broadcasting: 5\n" Mar 9 09:20:16.712: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9753.svc.cluster.local\tcanonical name = externalsvc.services-9753.svc.cluster.local.\nName:\texternalsvc.services-9753.svc.cluster.local\nAddress: 10.109.106.175\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9753, will wait for the garbage collector to delete the pods Mar 9 09:20:16.803: INFO: Deleting ReplicationController externalsvc took: 13.678505ms Mar 9 09:20:17.103: INFO: Terminating ReplicationController externalsvc pods took: 300.26883ms Mar 9 09:20:26.136: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:26.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9753" for this suite. 
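After the conversion, the cluster DNS record for the service becomes a CNAME to the external name, which is exactly what the nslookup output above shows. The end state corresponds to a spec along these lines (a sketch of the converted service; changing the type on a live ClusterIP service also requires clearing clusterIP and ports, which the test does through the API):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-9753
spec:
  type: ExternalName
  externalName: externalsvc.services-9753.svc.cluster.local   # served as a CNAME; no cluster IP, no proxying
EOF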
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.997 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":117,"skipped":2023,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:26.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:20:26.258: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:27.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6819" for this suite. 
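Defaulting "for requests and from storage" means the apiserver applies default: values from the CRD's structural schema both when objects are written and when older stored objects are read back from etcd. An illustrative CRD (hypothetical group and kind, not the test's actual definition):

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com     # hypothetical
spec:
  group: example.com
  scope: Namespaced
  names: {plural: widgets, singular: widget, kind: Widget}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1      # injected on create/update requests and when decoding from storage
EOF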
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":118,"skipped":2041,"failed":0} ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:27.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-9297 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9297 to expose endpoints map[] Mar 9 09:20:27.576: INFO: Get endpoints failed (3.019744ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 9 09:20:28.579: INFO: successfully validated that service endpoint-test2 in namespace services-9297 exposes endpoints map[] (1.006179649s elapsed) STEP: Creating pod pod1 in namespace services-9297 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9297 to expose endpoints map[pod1:[80]] Mar 9 09:20:30.658: INFO: successfully validated that service endpoint-test2 in namespace services-9297 exposes endpoints map[pod1:[80]] (2.073513625s elapsed) STEP: Creating pod pod2 in namespace services-9297 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9297 to expose endpoints map[pod1:[80] pod2:[80]] Mar 9 09:20:32.763: INFO: successfully validated that service endpoint-test2 in namespace services-9297 exposes endpoints map[pod1:[80] pod2:[80]] (2.101267451s elapsed) STEP: Deleting pod pod1 in namespace services-9297 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9297 to expose endpoints map[pod2:[80]] Mar 9 09:20:32.831: INFO: successfully validated that service endpoint-test2 in namespace services-9297 exposes endpoints map[pod2:[80]] (53.409585ms elapsed) STEP: Deleting pod pod2 in namespace services-9297 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9297 to expose endpoints map[] Mar 9 09:20:33.840: INFO: successfully validated that service endpoint-test2 in namespace services-9297 exposes endpoints map[] (1.005834862s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:33.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9297" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:6.400 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":119,"skipped":2041,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:33.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:20:33.982: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 9 09:20:36.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2233 create -f -' Mar 9 09:20:38.778: INFO: stderr: "" Mar 9 09:20:38.778: INFO: stdout: "e2e-test-crd-publish-openapi-7890-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 9 09:20:38.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2233 delete e2e-test-crd-publish-openapi-7890-crds test-cr' Mar 9 09:20:38.894: INFO: stderr: "" Mar 9 09:20:38.894: INFO: stdout: "e2e-test-crd-publish-openapi-7890-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 9 09:20:38.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2233 apply -f -' Mar 9 09:20:39.209: INFO: stderr: "" Mar 9 09:20:39.209: INFO: stdout: "e2e-test-crd-publish-openapi-7890-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 9 09:20:39.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2233 delete e2e-test-crd-publish-openapi-7890-crds test-cr' Mar 9 09:20:39.330: INFO: stderr: "" Mar 9 09:20:39.331: INFO: stdout: "e2e-test-crd-publish-openapi-7890-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 9 09:20:39.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7890-crds' Mar 9 09:20:39.561: INFO: stderr: "" Mar 9 09:20:39.561: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7890-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion 
defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:42.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2233" for this suite. • [SLOW TEST:8.439 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":120,"skipped":2075,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:42.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
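The pod created in the next step wires an HTTP postStart hook roughly like this (a sketch with illustrative values, not the test's actual manifest); the kubelet issues the GET immediately after the container starts, and a failing hook kills the container:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1   # illustrative image
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.10       # the handler pod's IP (illustrative)
          port: 8080
          path: /echo?msg=poststart
EOF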
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 9 09:20:46.489: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 9 09:20:46.499: INFO: Pod pod-with-poststart-http-hook still exists Mar 9 09:20:48.499: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 9 09:20:48.502: INFO: Pod pod-with-poststart-http-hook still exists Mar 9 09:20:50.499: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 9 09:20:50.502: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:50.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6273" for this suite. • [SLOW TEST:8.167 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2141,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:50.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-570dfa07-2a2d-471a-b049-498ca4d41dd2 STEP: Creating a pod to test consume configMaps Mar 9 09:20:50.592: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e096032f-70dd-43db-b059-134f6ad10f4c" in namespace "projected-7444" to be "success or failure" Mar 9 09:20:50.600: INFO: Pod "pod-projected-configmaps-e096032f-70dd-43db-b059-134f6ad10f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.964122ms Mar 9 09:20:52.604: INFO: Pod "pod-projected-configmaps-e096032f-70dd-43db-b059-134f6ad10f4c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011990056s STEP: Saw pod success Mar 9 09:20:52.605: INFO: Pod "pod-projected-configmaps-e096032f-70dd-43db-b059-134f6ad10f4c" satisfied condition "success or failure" Mar 9 09:20:52.608: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e096032f-70dd-43db-b059-134f6ad10f4c container projected-configmap-volume-test: STEP: delete the pod Mar 9 09:20:52.645: INFO: Waiting for pod pod-projected-configmaps-e096032f-70dd-43db-b059-134f6ad10f4c to disappear Mar 9 09:20:52.650: INFO: Pod pod-projected-configmaps-e096032f-70dd-43db-b059-134f6ad10f4c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:52.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7444" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2145,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:52.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-a15d0bb6-4ae2-4362-8e62-cd798f45a3ed STEP: Creating a pod to test consume configMaps Mar 9 09:20:52.736: INFO: Waiting up to 5m0s for pod "pod-configmaps-26307397-4d3b-4980-b5d7-8b26283706ac" in namespace "configmap-7067" to be "success or failure" Mar 9 09:20:52.740: INFO: Pod "pod-configmaps-26307397-4d3b-4980-b5d7-8b26283706ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106749ms Mar 9 09:20:54.744: INFO: Pod "pod-configmaps-26307397-4d3b-4980-b5d7-8b26283706ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008080476s STEP: Saw pod success Mar 9 09:20:54.745: INFO: Pod "pod-configmaps-26307397-4d3b-4980-b5d7-8b26283706ac" satisfied condition "success or failure" Mar 9 09:20:54.747: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-26307397-4d3b-4980-b5d7-8b26283706ac container configmap-volume-test: STEP: delete the pod Mar 9 09:20:54.797: INFO: Waiting for pod pod-configmaps-26307397-4d3b-4980-b5d7-8b26283706ac to disappear Mar 9 09:20:54.812: INFO: Pod pod-configmaps-26307397-4d3b-4980-b5d7-8b26283706ac no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:54.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7067" for this suite. 
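Nothing prevents the same ConfigMap from backing several volumes in one pod; each mount gets its own projection of the data. A sketch (hypothetical names):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-mounts      # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    configMap:
      name: configmap-test-volume   # same ConfigMap ...
  - name: cm-b
    configMap:
      name: configmap-test-volume   # ... mounted twice
EOF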
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:54.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-84f43863-640f-4aaa-a726-afc669f5f860 STEP: Creating a pod to test consume configMaps Mar 9 09:20:54.923: INFO: Waiting up to 5m0s for pod "pod-configmaps-98c025bb-1653-444f-9ba1-5d0aa8eb393f" in namespace "configmap-2499" to be "success or failure" Mar 9 09:20:54.982: INFO: Pod "pod-configmaps-98c025bb-1653-444f-9ba1-5d0aa8eb393f": Phase="Pending", Reason="", readiness=false. Elapsed: 58.530388ms Mar 9 09:20:56.986: INFO: Pod "pod-configmaps-98c025bb-1653-444f-9ba1-5d0aa8eb393f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062482496s STEP: Saw pod success Mar 9 09:20:56.986: INFO: Pod "pod-configmaps-98c025bb-1653-444f-9ba1-5d0aa8eb393f" satisfied condition "success or failure" Mar 9 09:20:56.989: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-98c025bb-1653-444f-9ba1-5d0aa8eb393f container configmap-volume-test: STEP: delete the pod Mar 9 09:20:57.051: INFO: Waiting for pod pod-configmaps-98c025bb-1653-444f-9ba1-5d0aa8eb393f to disappear Mar 9 09:20:57.056: INFO: Pod pod-configmaps-98c025bb-1653-444f-9ba1-5d0aa8eb393f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:20:57.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2499" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2203,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:20:57.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 9 09:20:57.134: INFO: Waiting up to 5m0s for pod "pod-bf35691f-c249-4453-ac6a-135ae4eb39a3" in namespace "emptydir-5663" to be "success or failure" Mar 9 09:20:57.173: INFO: Pod "pod-bf35691f-c249-4453-ac6a-135ae4eb39a3": Phase="Pending", Reason="", readiness=false. Elapsed: 38.86316ms Mar 9 09:20:59.177: INFO: Pod "pod-bf35691f-c249-4453-ac6a-135ae4eb39a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042784889s Mar 9 09:21:01.181: INFO: Pod "pod-bf35691f-c249-4453-ac6a-135ae4eb39a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046832918s STEP: Saw pod success Mar 9 09:21:01.181: INFO: Pod "pod-bf35691f-c249-4453-ac6a-135ae4eb39a3" satisfied condition "success or failure" Mar 9 09:21:01.185: INFO: Trying to get logs from node jerma-worker2 pod pod-bf35691f-c249-4453-ac6a-135ae4eb39a3 container test-container: STEP: delete the pod Mar 9 09:21:01.241: INFO: Waiting for pod pod-bf35691f-c249-4453-ac6a-135ae4eb39a3 to disappear Mar 9 09:21:01.248: INFO: Pod pod-bf35691f-c249-4453-ac6a-135ae4eb39a3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:21:01.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5663" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2225,"failed":0} ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:21:01.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-5396/secret-test-255a0c82-7f1e-4adc-88d4-e14fd675a24d STEP: Creating a pod to test consume secrets Mar 9 09:21:01.309: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d648603-aacd-4587-b96d-3b8b1f0cb516" in namespace "secrets-5396" to be "success or failure" Mar 9 09:21:01.314: INFO: Pod "pod-configmaps-2d648603-aacd-4587-b96d-3b8b1f0cb516": Phase="Pending", Reason="", readiness=false. Elapsed: 4.594537ms Mar 9 09:21:03.317: INFO: Pod "pod-configmaps-2d648603-aacd-4587-b96d-3b8b1f0cb516": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007849371s Mar 9 09:21:05.321: INFO: Pod "pod-configmaps-2d648603-aacd-4587-b96d-3b8b1f0cb516": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011640681s STEP: Saw pod success Mar 9 09:21:05.321: INFO: Pod "pod-configmaps-2d648603-aacd-4587-b96d-3b8b1f0cb516" satisfied condition "success or failure" Mar 9 09:21:05.323: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-2d648603-aacd-4587-b96d-3b8b1f0cb516 container env-test: STEP: delete the pod Mar 9 09:21:05.372: INFO: Waiting for pod pod-configmaps-2d648603-aacd-4587-b96d-3b8b1f0cb516 to disappear Mar 9 09:21:05.380: INFO: Pod pod-configmaps-2d648603-aacd-4587-b96d-3b8b1f0cb516 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:21:05.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5396" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2225,"failed":0} ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:21:05.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 9 09:21:08.014: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9bd93f67-1404-4857-8e6b-3d38e5b2fcd4" Mar 9 09:21:08.014: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-9bd93f67-1404-4857-8e6b-3d38e5b2fcd4" in namespace "pods-690" to be "terminated due to deadline exceeded" Mar 9 09:21:08.089: INFO: Pod "pod-update-activedeadlineseconds-9bd93f67-1404-4857-8e6b-3d38e5b2fcd4": Phase="Running", Reason="", readiness=true. Elapsed: 74.855166ms Mar 9 09:21:10.093: INFO: Pod "pod-update-activedeadlineseconds-9bd93f67-1404-4857-8e6b-3d38e5b2fcd4": Phase="Running", Reason="", readiness=true. Elapsed: 2.07860701s Mar 9 09:21:12.097: INFO: Pod "pod-update-activedeadlineseconds-9bd93f67-1404-4857-8e6b-3d38e5b2fcd4": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.082172975s Mar 9 09:21:12.097: INFO: Pod "pod-update-activedeadlineseconds-9bd93f67-1404-4857-8e6b-3d38e5b2fcd4" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:21:12.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-690" for this suite. 
• [SLOW TEST:6.710 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2225,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:21:12.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-tkfg STEP: Creating a pod to test atomic-volume-subpath Mar 9 09:21:12.336: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-tkfg" in namespace "subpath-8735" to be "success or failure" Mar 9 09:21:12.346: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Pending", Reason="", readiness=false. Elapsed: 9.795568ms Mar 9 09:21:14.350: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013275293s Mar 9 09:21:16.352: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Running", Reason="", readiness=true. Elapsed: 4.015673828s Mar 9 09:21:18.356: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Running", Reason="", readiness=true. Elapsed: 6.019553979s Mar 9 09:21:20.360: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Running", Reason="", readiness=true. Elapsed: 8.023219274s Mar 9 09:21:22.363: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Running", Reason="", readiness=true. Elapsed: 10.02694776s Mar 9 09:21:24.367: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Running", Reason="", readiness=true. Elapsed: 12.030711988s Mar 9 09:21:26.461: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Running", Reason="", readiness=true. Elapsed: 14.124944795s Mar 9 09:21:28.465: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Running", Reason="", readiness=true. Elapsed: 16.128768176s Mar 9 09:21:30.469: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Running", Reason="", readiness=true. Elapsed: 18.13261803s Mar 9 09:21:32.472: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Running", Reason="", readiness=true. Elapsed: 20.135372491s Mar 9 09:21:34.476: INFO: Pod "pod-subpath-test-downwardapi-tkfg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.139667251s STEP: Saw pod success Mar 9 09:21:34.476: INFO: Pod "pod-subpath-test-downwardapi-tkfg" satisfied condition "success or failure" Mar 9 09:21:34.480: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-tkfg container test-container-subpath-downwardapi-tkfg: STEP: delete the pod Mar 9 09:21:34.540: INFO: Waiting for pod pod-subpath-test-downwardapi-tkfg to disappear Mar 9 09:21:34.548: INFO: Pod pod-subpath-test-downwardapi-tkfg no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-tkfg Mar 9 09:21:34.548: INFO: Deleting pod "pod-subpath-test-downwardapi-tkfg" in namespace "subpath-8735" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:21:34.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8735" for this suite. • [SLOW TEST:22.455 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":128,"skipped":2232,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:21:34.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-7292 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7292 to expose endpoints map[] Mar 9 09:21:34.706: INFO: successfully validated that service multi-endpoint-test in namespace services-7292 exposes endpoints map[] (20.11903ms elapsed) STEP: Creating pod pod1 in namespace services-7292 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7292 to expose endpoints map[pod1:[100]] Mar 9 09:21:36.760: INFO: successfully validated that service multi-endpoint-test in namespace services-7292 exposes endpoints map[pod1:[100]] (2.047438533s elapsed) STEP: Creating pod pod2 in namespace services-7292 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7292 to expose endpoints map[pod1:[100] pod2:[101]] Mar 9 09:21:38.832: INFO: successfully validated that service multi-endpoint-test in namespace services-7292 exposes endpoints map[pod1:[100] pod2:[101]] (2.067778111s elapsed) STEP: Deleting pod pod1 in namespace services-7292 STEP: waiting up to 3m0s for 
service multi-endpoint-test in namespace services-7292 to expose endpoints map[pod2:[101]] Mar 9 09:21:39.868: INFO: successfully validated that service multi-endpoint-test in namespace services-7292 exposes endpoints map[pod2:[101]] (1.030623674s elapsed) STEP: Deleting pod pod2 in namespace services-7292 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7292 to expose endpoints map[] Mar 9 09:21:40.879: INFO: successfully validated that service multi-endpoint-test in namespace services-7292 exposes endpoints map[] (1.00619639s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:21:40.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7292" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:6.382 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":129,"skipped":2242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:21:40.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 9 09:21:43.034: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:21:43.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5419" for this suite. 
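terminationMessagePath defaults to /dev/termination-log; whatever the container writes there (a few KB at most) is copied into status.containerStatuses[].state.terminated.message on exit, which is where the test reads "DONE" from. A sketch with a non-root user and a non-default path (hypothetical names):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo  # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom   # non-default path
EOF
# kubectl get pod termination-message-demo \
#   -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'  ->  DONE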
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2274,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:21:43.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 9 09:21:43.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4943' Mar 9 09:21:43.278: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 9 09:21:43.278: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 9 09:21:43.298: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Mar 9 09:21:43.303: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 9 09:21:43.330: INFO: scanned /root for discovery docs: Mar 9 09:21:43.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4943' Mar 9 09:21:59.232: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 9 09:21:59.233: INFO: stdout: "Created e2e-test-httpd-rc-4e43c30ca683510eb006341204b8646e\nScaling up e2e-test-httpd-rc-4e43c30ca683510eb006341204b8646e from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-4e43c30ca683510eb006341204b8646e up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-4e43c30ca683510eb006341204b8646e to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 9 09:21:59.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4943' Mar 9 09:21:59.337: INFO: stderr: "" Mar 9 09:21:59.337: INFO: stdout: "e2e-test-httpd-rc-4e43c30ca683510eb006341204b8646e-rlv74 " Mar 9 09:21:59.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4e43c30ca683510eb006341204b8646e-rlv74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4943' Mar 9 09:21:59.417: INFO: stderr: "" Mar 9 09:21:59.417: INFO: stdout: "true" Mar 9 09:21:59.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4e43c30ca683510eb006341204b8646e-rlv74 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4943' Mar 9 09:21:59.506: INFO: stderr: "" Mar 9 09:21:59.506: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 9 09:21:59.506: INFO: e2e-test-httpd-rc-4e43c30ca683510eb006341204b8646e-rlv74 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698 Mar 9 09:21:59.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4943' Mar 9 09:21:59.589: INFO: stderr: "" Mar 9 09:21:59.589: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:21:59.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4943" for this suite.
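Both commands exercised here were already deprecated at the time (the log says so) and have since been removed from kubectl; on current clusters the same "update to the same image" flow is expressed with a Deployment. A sketch of the modern equivalent (not what this test runs):

kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
kubectl rollout restart deployment/e2e-test-httpd   # re-rolls pods even though the image is unchanged
kubectl rollout status deployment/e2e-test-httpd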
• [SLOW TEST:16.513 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":131,"skipped":2283,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:21:59.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-d5f8cead-97f2-496d-80d3-8d513cf05b80 STEP: Creating a pod to test consume secrets Mar 9 09:21:59.667: INFO: Waiting up to 5m0s for pod "pod-secrets-239eedee-2a3c-4ab8-a892-d21c17ccfa51" in namespace "secrets-4150" to be "success or failure" Mar 9 09:21:59.671: INFO: Pod "pod-secrets-239eedee-2a3c-4ab8-a892-d21c17ccfa51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054231ms Mar 9 09:22:01.674: INFO: Pod "pod-secrets-239eedee-2a3c-4ab8-a892-d21c17ccfa51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007546684s STEP: Saw pod success Mar 9 09:22:01.674: INFO: Pod "pod-secrets-239eedee-2a3c-4ab8-a892-d21c17ccfa51" satisfied condition "success or failure" Mar 9 09:22:01.676: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-239eedee-2a3c-4ab8-a892-d21c17ccfa51 container secret-volume-test: STEP: delete the pod Mar 9 09:22:01.697: INFO: Waiting for pod pod-secrets-239eedee-2a3c-4ab8-a892-d21c17ccfa51 to disappear Mar 9 09:22:01.701: INFO: Pod pod-secrets-239eedee-2a3c-4ab8-a892-d21c17ccfa51 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:22:01.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4150" for this suite. 
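
The manifest the suite generates is not echoed into the log; the following is a hypothetical equivalent showing what "volume with mappings" means in practice, namely a Secret key re-exposed under a custom file name via items. All names and the data value here are illustrative assumptions, not read from the log.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map            # the suite uses generated UUID names
stringData:
  data-1: value-1                  # illustrative payload
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-map
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test       # container name as logged above
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1      # the mapping: key surfaced at a new path
EOF
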
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2283,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:22:01.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0309 09:22:32.326275 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 9 09:22:32.326: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:22:32.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7178" for this suite. 
• [SLOW TEST:30.624 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":133,"skipped":2318,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:22:32.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 9 09:22:32.439: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:32.457: INFO: Number of nodes with available pods: 0 Mar 9 09:22:32.458: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:22:33.504: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:33.506: INFO: Number of nodes with available pods: 0 Mar 9 09:22:33.506: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:22:34.462: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:34.464: INFO: Number of nodes with available pods: 0 Mar 9 09:22:34.464: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:22:35.462: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:35.465: INFO: Number of nodes with available pods: 2 Mar 9 09:22:35.465: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 9 09:22:35.502: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:35.505: INFO: Number of nodes with available pods: 1 Mar 9 09:22:35.505: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:22:36.510: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:36.513: INFO: Number of nodes with available pods: 1 Mar 9 09:22:36.513: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:22:37.516: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:37.526: INFO: Number of nodes with available pods: 1 Mar 9 09:22:37.526: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:22:38.511: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:38.514: INFO: Number of nodes with available pods: 1 Mar 9 09:22:38.514: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:22:39.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:39.512: INFO: Number of nodes with available pods: 1 Mar 9 09:22:39.512: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:22:40.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:40.512: INFO: Number of nodes with available pods: 1 Mar 9 09:22:40.512: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:22:41.510: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:22:41.514: INFO: Number of nodes with available pods: 2 Mar 9 09:22:41.514: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-836, will wait for the garbage collector to delete the pods Mar 9 09:22:41.574: INFO: Deleting DaemonSet.extensions daemon-set took: 4.991025ms Mar 9 09:22:41.674: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.231039ms Mar 9 09:22:46.077: INFO: Number of nodes with available pods: 0 Mar 9 09:22:46.077: INFO: Number of running nodes: 0, number of available pods: 0 Mar 9 09:22:46.080: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-836/daemonsets","resourceVersion":"269047"},"items":null} Mar 9 09:22:46.083: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-836/pods","resourceVersion":"269047"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:22:46.092: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-836" for this suite. • [SLOW TEST:13.765 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":134,"skipped":2330,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:22:46.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-5620896a-3aae-48e7-b71f-aba80aec3eec STEP: Creating a pod to test consume secrets Mar 9 09:22:46.153: INFO: Waiting up to 5m0s for pod "pod-secrets-b2f34d1e-40aa-404d-a6c8-9bedb8f8e3d5" in namespace "secrets-3673" to be "success or failure" Mar 9 09:22:46.185: INFO: Pod "pod-secrets-b2f34d1e-40aa-404d-a6c8-9bedb8f8e3d5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.73299ms Mar 9 09:22:48.189: INFO: Pod "pod-secrets-b2f34d1e-40aa-404d-a6c8-9bedb8f8e3d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.036542264s STEP: Saw pod success Mar 9 09:22:48.189: INFO: Pod "pod-secrets-b2f34d1e-40aa-404d-a6c8-9bedb8f8e3d5" satisfied condition "success or failure" Mar 9 09:22:48.192: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b2f34d1e-40aa-404d-a6c8-9bedb8f8e3d5 container secret-volume-test: STEP: delete the pod Mar 9 09:22:48.226: INFO: Waiting for pod pod-secrets-b2f34d1e-40aa-404d-a6c8-9bedb8f8e3d5 to disappear Mar 9 09:22:48.251: INFO: Pod pod-secrets-b2f34d1e-40aa-404d-a6c8-9bedb8f8e3d5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:22:48.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3673" for this suite. 
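
The pass condition in these secret-volume specs is the same each time: the pod runs to completion and its log contains the secret payload, which is what the "Saw pod success" and "Trying to get logs" steps record. Reusing the illustrative pod from the sketch after the mappings case above, the by-hand check looks like this (pod and container names are from that sketch, not this run):

kubectl get pod pod-secrets-map -o jsonpath='{.status.phase}'   # expect: Succeeded
kubectl logs pod-secrets-map -c secret-volume-test              # expect: value-1
kubectl delete pod pod-secrets-map                              # mirrors "STEP: delete the pod"
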
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2342,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:22:48.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-e4706465-08a9-4d8d-93ec-a625ab20d190 STEP: Creating a pod to test consume secrets Mar 9 09:22:48.328: INFO: Waiting up to 5m0s for pod "pod-secrets-08472406-67d6-4eba-aee3-eb8790b0c599" in namespace "secrets-7298" to be "success or failure" Mar 9 09:22:48.338: INFO: Pod "pod-secrets-08472406-67d6-4eba-aee3-eb8790b0c599": Phase="Pending", Reason="", readiness=false. Elapsed: 9.520105ms Mar 9 09:22:50.341: INFO: Pod "pod-secrets-08472406-67d6-4eba-aee3-eb8790b0c599": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012829631s STEP: Saw pod success Mar 9 09:22:50.341: INFO: Pod "pod-secrets-08472406-67d6-4eba-aee3-eb8790b0c599" satisfied condition "success or failure" Mar 9 09:22:50.343: INFO: Trying to get logs from node jerma-worker pod pod-secrets-08472406-67d6-4eba-aee3-eb8790b0c599 container secret-volume-test: STEP: delete the pod Mar 9 09:22:50.363: INFO: Waiting for pod pod-secrets-08472406-67d6-4eba-aee3-eb8790b0c599 to disappear Mar 9 09:22:50.367: INFO: Pod pod-secrets-08472406-67d6-4eba-aee3-eb8790b0c599 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:22:50.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7298" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2345,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:22:50.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 9 09:22:50.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7482' Mar 9 09:22:50.554: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 9 09:22:50.554: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 9 09:22:50.608: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-4l2kx] Mar 9 09:22:50.608: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-4l2kx" in namespace "kubectl-7482" to be "running and ready" Mar 9 09:22:50.613: INFO: Pod "e2e-test-httpd-rc-4l2kx": Phase="Pending", Reason="", readiness=false. Elapsed: 5.493228ms Mar 9 09:22:52.618: INFO: Pod "e2e-test-httpd-rc-4l2kx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009992945s Mar 9 09:22:54.622: INFO: Pod "e2e-test-httpd-rc-4l2kx": Phase="Running", Reason="", readiness=true. Elapsed: 4.013969285s Mar 9 09:22:54.622: INFO: Pod "e2e-test-httpd-rc-4l2kx" satisfied condition "running and ready" Mar 9 09:22:54.622: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-4l2kx] Mar 9 09:22:54.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7482' Mar 9 09:22:54.774: INFO: stderr: "" Mar 9 09:22:54.774: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.56. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.56. 
Set the 'ServerName' directive globally to suppress this message\n[Mon Mar 09 09:22:52.180740 2020] [mpm_event:notice] [pid 1:tid 140619498367848] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Mar 09 09:22:52.180797 2020] [core:notice] [pid 1:tid 140619498367848] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637 Mar 9 09:22:54.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7482' Mar 9 09:22:54.883: INFO: stderr: "" Mar 9 09:22:54.883: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:22:54.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7482" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":137,"skipped":2349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:22:54.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 9 09:22:55.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3967 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 9 09:22:56.739: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0309 09:22:56.616063 1719 log.go:172] (0xc00095e0b0) (0xc0005d5a40) Create stream\nI0309 09:22:56.616119 1719 log.go:172] (0xc00095e0b0) (0xc0005d5a40) Stream added, broadcasting: 1\nI0309 09:22:56.618471 1719 log.go:172] (0xc00095e0b0) Reply frame received for 1\nI0309 09:22:56.618515 1719 log.go:172] (0xc00095e0b0) (0xc0007ec000) Create stream\nI0309 09:22:56.618529 1719 log.go:172] (0xc00095e0b0) (0xc0007ec000) Stream added, broadcasting: 3\nI0309 09:22:56.619444 1719 log.go:172] (0xc00095e0b0) Reply frame received for 3\nI0309 09:22:56.619484 1719 log.go:172] (0xc00095e0b0) (0xc0005d5ae0) Create stream\nI0309 09:22:56.619495 1719 log.go:172] (0xc00095e0b0) (0xc0005d5ae0) Stream added, broadcasting: 5\nI0309 09:22:56.620317 1719 log.go:172] (0xc00095e0b0) Reply frame received for 5\nI0309 09:22:56.620348 1719 log.go:172] (0xc00095e0b0) (0xc0007ec0a0) Create stream\nI0309 09:22:56.620357 1719 log.go:172] (0xc00095e0b0) (0xc0007ec0a0) Stream added, broadcasting: 7\nI0309 09:22:56.621199 1719 log.go:172] (0xc00095e0b0) Reply frame received for 7\nI0309 09:22:56.621355 1719 log.go:172] (0xc0007ec000) (3) Writing data frame\nI0309 09:22:56.621462 1719 log.go:172] (0xc0007ec000) (3) Writing data frame\nI0309 09:22:56.622556 1719 log.go:172] (0xc00095e0b0) Data frame received for 5\nI0309 09:22:56.622574 1719 log.go:172] (0xc0005d5ae0) (5) Data frame handling\nI0309 09:22:56.622585 1719 log.go:172] (0xc0005d5ae0) (5) Data frame sent\nI0309 09:22:56.623097 1719 log.go:172] (0xc00095e0b0) Data frame received for 5\nI0309 09:22:56.623113 1719 log.go:172] (0xc0005d5ae0) (5) Data frame handling\nI0309 09:22:56.623127 1719 log.go:172] (0xc0005d5ae0) (5) Data frame sent\nI0309 09:22:56.639577 1719 log.go:172] (0xc00095e0b0) Data frame received for 5\nI0309 09:22:56.639656 1719 log.go:172] (0xc0005d5ae0) (5) Data frame handling\nI0309 09:22:56.639685 1719 log.go:172] (0xc00095e0b0) Data frame received for 7\nI0309 09:22:56.639700 1719 log.go:172] (0xc0007ec0a0) (7) Data frame handling\nI0309 09:22:56.640485 1719 log.go:172] (0xc00095e0b0) Data frame received for 1\nI0309 09:22:56.640557 1719 log.go:172] (0xc0005d5a40) (1) Data frame handling\nI0309 09:22:56.640587 1719 log.go:172] (0xc0005d5a40) (1) Data frame sent\nI0309 09:22:56.640611 1719 log.go:172] (0xc00095e0b0) (0xc0005d5a40) Stream removed, broadcasting: 1\nI0309 09:22:56.640755 1719 log.go:172] (0xc00095e0b0) (0xc0007ec000) Stream removed, broadcasting: 3\nI0309 09:22:56.640862 1719 log.go:172] (0xc00095e0b0) Go away received\nI0309 09:22:56.640934 1719 log.go:172] (0xc00095e0b0) (0xc0005d5a40) Stream removed, broadcasting: 1\nI0309 09:22:56.640956 1719 log.go:172] (0xc00095e0b0) (0xc0007ec000) Stream removed, broadcasting: 3\nI0309 09:22:56.640972 1719 log.go:172] (0xc00095e0b0) (0xc0005d5ae0) Stream removed, broadcasting: 5\nI0309 09:22:56.640987 1719 log.go:172] (0xc00095e0b0) (0xc0007ec0a0) Stream removed, broadcasting: 7\n" Mar 9 09:22:56.739: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:22:58.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3967" for this suite. 
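
The attach/stdin invocation above is runnable nearly verbatim against a v1.17 cluster (--generator=job/v1 is deprecated there and removed in later releases); the inner command is quoted here for an interactive shell. Piping input makes the cat-then-echo behaviour visible, and --rm is what produces the trailing "job.batch ... deleted" line in the stdout above.

echo -n abcd1234 | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
  -- sh -c 'cat && echo "stdin closed"'
# stdout: abcd1234stdin closed

# --rm should leave nothing behind:
kubectl get job e2e-test-rm-busybox-job     # expect: NotFound
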
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":138,"skipped":2401,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:22:58.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:22:58.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1851' Mar 9 09:22:59.191: INFO: stderr: "" Mar 9 09:22:59.191: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 9 09:22:59.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1851' Mar 9 09:22:59.541: INFO: stderr: "" Mar 9 09:22:59.541: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 9 09:23:00.751: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:23:00.751: INFO: Found 0 / 1 Mar 9 09:23:01.995: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:23:01.996: INFO: Found 0 / 1 Mar 9 09:23:02.545: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:23:02.545: INFO: Found 0 / 1 Mar 9 09:23:03.545: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:23:03.545: INFO: Found 1 / 1 Mar 9 09:23:03.545: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 9 09:23:03.547: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:23:03.547: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 9 09:23:03.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-55h84 --namespace=kubectl-1851' Mar 9 09:23:03.673: INFO: stderr: "" Mar 9 09:23:03.673: INFO: stdout: "Name: agnhost-master-55h84\nNamespace: kubectl-1851\nPriority: 0\nNode: jerma-worker2/172.17.0.5\nStart Time: Mon, 09 Mar 2020 09:22:59 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.64\nIPs:\n IP: 10.244.1.64\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://2e96dc05e70108a8e04b33bf3a4925b875bc8c2cc4b68b38c770896422c4fd02\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 09 Mar 2020 09:23:02 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-hhf8r (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-hhf8r:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hhf8r\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-1851/agnhost-master-55h84 to jerma-worker2\n Normal Pulled 4s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Mar 9 09:23:03.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1851' Mar 9 09:23:03.821: INFO: stderr: "" Mar 9 09:23:03.821: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1851\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-55h84\n" Mar 9 09:23:03.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1851' Mar 9 09:23:03.913: INFO: stderr: "" Mar 9 09:23:03.913: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1851\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.106.162.11\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.64:6379\nSession Affinity: None\nEvents: \n" Mar 9 09:23:03.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Mar 9 09:23:04.050: INFO: stderr: "" Mar 9 09:23:04.050: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n 
kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:47:04 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Mon, 09 Mar 2020 09:22:56 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 09 Mar 2020 09:18:48 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 09 Mar 2020 09:18:48 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 09 Mar 2020 09:18:48 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 09 Mar 2020 09:18:48 +0000 Sun, 08 Mar 2020 14:48:18 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: 3f4950fefd574d4aaa94513c5781e5d9\n System UUID: 58a385c4-2d08-428a-9405-5e6b12d5bd17\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-6n4ms 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 18h\n kube-system coredns-6955765f44-nlwfn 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 18h\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18h\n kube-system kindnet-2glhp 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 18h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 18h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 18h\n kube-system kube-proxy-zmch2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 18h\n local-path-storage local-path-provisioner-85445b74d4-gpcbt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 9 09:23:04.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1851' Mar 9 09:23:04.149: INFO: stderr: "" Mar 9 09:23:04.149: INFO: stdout: "Name: kubectl-1851\nLabels: e2e-framework=kubectl\n e2e-run=81504a0c-4615-4024-ab3d-e12d1d86561b\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:23:04.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1851" for this suite. • [SLOW TEST:5.400 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1154 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":139,"skipped":2412,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:23:04.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 9 09:23:04.197: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:23:18.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-893" for this suite. 
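
A hedged sketch of the CRD shape this case manipulates: two versions, one of which is flipped to served: false, which removes its definitions from the published OpenAPI spec while leaving the other version intact. Group and kind names are illustrative; the suite generates its own.

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true          # still published under /openapi/v2
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false         # flipping this is the "mark a version not served" step
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
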
• [SLOW TEST:14.065 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":140,"skipped":2427,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:23:18.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-7c6b97d8-bca9-4e0d-87f8-10a4888aa05f STEP: Creating a pod to test consume secrets Mar 9 09:23:18.335: INFO: Waiting up to 5m0s for pod "pod-secrets-4eda467c-958e-4895-8bb7-f6d9e27b17a2" in namespace "secrets-5555" to be "success or failure" Mar 9 09:23:18.373: INFO: Pod "pod-secrets-4eda467c-958e-4895-8bb7-f6d9e27b17a2": Phase="Pending", Reason="", readiness=false. Elapsed: 37.904182ms Mar 9 09:23:20.375: INFO: Pod "pod-secrets-4eda467c-958e-4895-8bb7-f6d9e27b17a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040637716s STEP: Saw pod success Mar 9 09:23:20.376: INFO: Pod "pod-secrets-4eda467c-958e-4895-8bb7-f6d9e27b17a2" satisfied condition "success or failure" Mar 9 09:23:20.377: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-4eda467c-958e-4895-8bb7-f6d9e27b17a2 container secret-volume-test: STEP: delete the pod Mar 9 09:23:20.433: INFO: Waiting for pod pod-secrets-4eda467c-958e-4895-8bb7-f6d9e27b17a2 to disappear Mar 9 09:23:20.441: INFO: Pod pod-secrets-4eda467c-958e-4895-8bb7-f6d9e27b17a2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:23:20.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5555" for this suite. 
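
For the non-root/defaultMode/fsGroup variant just logged, an illustrative pod spec follows. The UID, group, and mode values are assumptions (the log does not print the manifest); the shape is what matters: defaultMode controls the file permissions of the projected keys, and fsGroup plus runAsUser let a non-root process read them. It reuses the illustrative Secret from the earlier sketch.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000       # non-root UID (assumed value)
    fsGroup: 1000         # group ownership applied to the volume
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map   # Secret from the earlier sketch
      defaultMode: 0440             # YAML octal: group-readable, as fsGroup needs
EOF
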
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2432,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:23:20.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 9 09:23:20.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1061' Mar 9 09:23:20.823: INFO: stderr: "" Mar 9 09:23:20.823: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 9 09:23:20.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1061' Mar 9 09:23:20.948: INFO: stderr: "" Mar 9 09:23:20.948: INFO: stdout: "update-demo-nautilus-ztvb6 update-demo-nautilus-zvv4z " Mar 9 09:23:20.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztvb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1061' Mar 9 09:23:21.029: INFO: stderr: "" Mar 9 09:23:21.029: INFO: stdout: "" Mar 9 09:23:21.029: INFO: update-demo-nautilus-ztvb6 is created but not running Mar 9 09:23:26.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1061' Mar 9 09:23:26.150: INFO: stderr: "" Mar 9 09:23:26.150: INFO: stdout: "update-demo-nautilus-ztvb6 update-demo-nautilus-zvv4z " Mar 9 09:23:26.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztvb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1061' Mar 9 09:23:26.268: INFO: stderr: "" Mar 9 09:23:26.268: INFO: stdout: "true" Mar 9 09:23:26.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztvb6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1061' Mar 9 09:23:26.340: INFO: stderr: "" Mar 9 09:23:26.340: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 09:23:26.340: INFO: validating pod update-demo-nautilus-ztvb6 Mar 9 09:23:26.344: INFO: got data: { "image": "nautilus.jpg" } Mar 9 09:23:26.344: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 09:23:26.344: INFO: update-demo-nautilus-ztvb6 is verified up and running Mar 9 09:23:26.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zvv4z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1061' Mar 9 09:23:26.422: INFO: stderr: "" Mar 9 09:23:26.422: INFO: stdout: "true" Mar 9 09:23:26.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zvv4z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1061' Mar 9 09:23:26.500: INFO: stderr: "" Mar 9 09:23:26.500: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 09:23:26.500: INFO: validating pod update-demo-nautilus-zvv4z Mar 9 09:23:26.503: INFO: got data: { "image": "nautilus.jpg" } Mar 9 09:23:26.503: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 09:23:26.503: INFO: update-demo-nautilus-zvv4z is verified up and running STEP: using delete to clean up resources Mar 9 09:23:26.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1061' Mar 9 09:23:26.583: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 9 09:23:26.584: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 9 09:23:26.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1061' Mar 9 09:23:26.672: INFO: stderr: "No resources found in kubectl-1061 namespace.\n" Mar 9 09:23:26.672: INFO: stdout: "" Mar 9 09:23:26.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1061 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 9 09:23:26.738: INFO: stderr: "" Mar 9 09:23:26.738: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:23:26.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1061" for this suite. 
• [SLOW TEST:6.293 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":142,"skipped":2447,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:23:26.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:23:42.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6789" for this suite. • [SLOW TEST:16.173 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":143,"skipped":2448,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:23:42.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:23:43.526: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 9 09:23:45.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342623, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342623, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342623, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342623, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:23:48.573: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 9 09:23:48.592: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:23:48.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-709" for this suite. STEP: Destroying namespace "webhook-709-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.788 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":144,"skipped":2462,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:23:48.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:23:52.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1360" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2483,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:23:52.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 9 09:23:52.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7949' Mar 9 09:23:53.140: INFO: stderr: "" Mar 9 09:23:53.140: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Mar 9 09:23:54.143: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:23:54.143: INFO: Found 0 / 1 Mar 9 09:23:55.144: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:23:55.144: INFO: Found 1 / 1 Mar 9 09:23:55.144: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 9 09:23:55.148: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:23:55.148: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 9 09:23:55.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-7g4rq --namespace=kubectl-7949 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 9 09:23:55.265: INFO: stderr: "" Mar 9 09:23:55.266: INFO: stdout: "pod/agnhost-master-7g4rq patched\n" STEP: checking annotations Mar 9 09:23:55.282: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:23:55.282: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:23:55.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7949" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":146,"skipped":2491,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:23:55.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0309 09:23:56.426034 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 9 09:23:56.426: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:23:56.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6817" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":147,"skipped":2517,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:23:56.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:23:56.522: INFO: Creating deployment "test-recreate-deployment" Mar 9 09:23:56.539: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 9 09:23:56.599: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 9 09:23:58.606: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 9 09:23:58.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342636, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342636, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342636, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342636, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 9 09:24:00.611: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 9 09:24:00.616: INFO: Updating deployment test-recreate-deployment Mar 9 09:24:00.616: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 9 09:24:00.949: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5947 /apis/apps/v1/namespaces/deployment-5947/deployments/test-recreate-deployment b3c77dea-d0b8-42d0-8d1c-23848312d10f 269756 2 2020-03-09 09:23:56 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036915b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-09 09:24:00 +0000 UTC,LastTransitionTime:2020-03-09 09:24:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-09 09:24:00 +0000 UTC,LastTransitionTime:2020-03-09 09:23:56 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 9 09:24:00.951: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-5947 /apis/apps/v1/namespaces/deployment-5947/replicasets/test-recreate-deployment-5f94c574ff b8563c7a-7782-4e2b-b85f-01da8004563b 269755 1 2020-03-09 09:24:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment b3c77dea-d0b8-42d0-8d1c-23848312d10f 0xc00365dbb7 0xc00365dbb8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd 
docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00365dc18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 9 09:24:00.951: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 9 09:24:00.951: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-5947 /apis/apps/v1/namespaces/deployment-5947/replicasets/test-recreate-deployment-799c574856 74f7bf8b-29b3-41bb-93f8-96a75d205c47 269742 2 2020-03-09 09:23:56 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment b3c77dea-d0b8-42d0-8d1c-23848312d10f 0xc00365dc87 0xc00365dc88}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00365dcf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 9 09:24:00.954: INFO: Pod "test-recreate-deployment-5f94c574ff-dldb8" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-dldb8 test-recreate-deployment-5f94c574ff- deployment-5947 /api/v1/namespaces/deployment-5947/pods/test-recreate-deployment-5f94c574ff-dldb8 e06f8358-101e-41a9-a886-272d42637ff6 269754 0 2020-03-09 09:24:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff b8563c7a-7782-4e2b-b85f-01da8004563b 0xc005a9c137 0xc005a9c138}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-crn2l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-crn2l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-crn2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:24:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:24:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:24:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:24:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-09 09:24:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:00.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5947" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":148,"skipped":2541,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:01.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 9 09:24:01.064: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 9 09:24:01.076: INFO: Waiting for terminating namespaces to be deleted... 
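Editor's note: the RecreateDeployment case above hinges on strategy type Recreate (visible in the Deployment dump: Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil}), which scales the old ReplicaSet to zero before the new one is created, so old and new pods never run side by side; per the dumps, the rollout itself was triggered by swapping the pod template image from agnhost to httpd. A stripped-down sketch of such a spec, names taken from the dump:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-recreate-deployment
    spec:
      replicas: 1
      strategy:
        type: Recreate               # no rollingUpdate block; old pods are killed first
      selector:
        matchLabels:
          name: sample-pod-3
      template:
        metadata:
          labels:
            name: sample-pod-3
        spec:
          containers:
          - name: httpd
            image: docker.io/library/httpd:2.4.38-alpine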
Mar 9 09:24:01.078: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 9 09:24:01.082: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 9 09:24:01.082: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 09:24:01.082: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 9 09:24:01.082: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 09:24:01.082: INFO: test-recreate-deployment-5f94c574ff-dldb8 from deployment-5947 started at 2020-03-09 09:24:00 +0000 UTC (1 container status recorded) Mar 9 09:24:01.082: INFO: Container httpd ready: false, restart count 0 Mar 9 09:24:01.082: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 9 09:24:01.086: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 9 09:24:01.086: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 09:24:01.086: INFO: agnhost-master-7g4rq from kubectl-7949 started at 2020-03-09 09:23:53 +0000 UTC (1 container status recorded) Mar 9 09:24:01.086: INFO: Container agnhost-master ready: true, restart count 0 Mar 9 09:24:01.086: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 9 09:24:01.086: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fa987391335590], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:02.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3051" for this suite.
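Editor's note: the SchedulerPredicates case above creates a pod whose nodeSelector matches no node label and asserts only on the FailedScheduling event quoted in the STEP line ("0/3 nodes are available: 3 node(s) didn't match node selector."); the pod is expected to stay Pending. A minimal sketch, with a deliberately unsatisfiable, hypothetical label:

    apiVersion: v1
    kind: Pod
    metadata:
      name: restricted-pod
    spec:
      nodeSelector:
        no-such-label: "42"          # hypothetical; no node carries this label
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1  # any image works; the pod never gets scheduled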
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":149,"skipped":2562,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:02.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:24:02.188: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 9 09:24:07.191: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 9 09:24:07.191: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 9 09:24:09.195: INFO: Creating deployment "test-rollover-deployment" Mar 9 09:24:09.210: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 9 09:24:11.216: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 9 09:24:11.222: INFO: Ensure that both replica sets have 1 created replica Mar 9 09:24:11.228: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 9 09:24:11.235: INFO: Updating deployment test-rollover-deployment Mar 9 09:24:11.236: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 9 09:24:13.250: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 9 09:24:13.254: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 9 09:24:13.259: INFO: all replica sets need to contain the pod-template-hash label Mar 9 09:24:13.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342652, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 9 09:24:15.265: INFO: all replica sets need to contain the pod-template-hash label Mar 9 09:24:15.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342652, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 9 09:24:17.267: INFO: all replica sets need to contain the pod-template-hash label Mar 9 09:24:17.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342652, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 9 09:24:19.268: INFO: all replica sets need to contain the pod-template-hash label Mar 9 09:24:19.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342652, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 9 09:24:21.267: INFO: all replica sets need to contain the pod-template-hash label Mar 9 09:24:21.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342652, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342649, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 9 09:24:23.264: INFO: Mar 9 09:24:23.264: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 9 09:24:23.268: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9192 /apis/apps/v1/namespaces/deployment-9192/deployments/test-rollover-deployment c1970b7e-7703-4669-b492-93030e5047ba 269960 2 2020-03-09 09:24:09 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0050e94b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-09 09:24:09 +0000 UTC,LastTransitionTime:2020-03-09 09:24:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-09 09:24:22 +0000 UTC,LastTransitionTime:2020-03-09 09:24:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 9 09:24:23.270: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-9192 /apis/apps/v1/namespaces/deployment-9192/replicasets/test-rollover-deployment-574d6dfbff 6032d017-cd82-483c-8641-458e4da8b7d0 269949 2 2020-03-09 09:24:11 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment c1970b7e-7703-4669-b492-93030e5047ba 0xc002be6e47 0xc002be6e48}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002be6eb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 9 09:24:23.270: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 9 09:24:23.270: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9192 /apis/apps/v1/namespaces/deployment-9192/replicasets/test-rollover-controller 1512568f-95af-45c3-b621-11b8289dbb23 269959 2 2020-03-09 09:24:02 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment c1970b7e-7703-4669-b492-93030e5047ba 0xc002be6d77 0xc002be6d78}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002be6dd8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 9 09:24:23.270: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9192 /apis/apps/v1/namespaces/deployment-9192/replicasets/test-rollover-deployment-f6c94f66c eddcec28-160e-40a7-8760-a84974cc2283 269902 2 2020-03-09 09:24:09 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment c1970b7e-7703-4669-b492-93030e5047ba 0xc002be6f20 0xc002be6f21}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002be6f98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 9 09:24:23.272: INFO: Pod "test-rollover-deployment-574d6dfbff-pjpgh" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-pjpgh test-rollover-deployment-574d6dfbff- deployment-9192 /api/v1/namespaces/deployment-9192/pods/test-rollover-deployment-574d6dfbff-pjpgh 27ec0085-b453-4ff6-84da-9166ae9eabe1 269917 0 2020-03-09 09:24:11 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 6032d017-cd82-483c-8641-458e4da8b7d0 0xc002be74c7 0xc002be74c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cl2n2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cl2n2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cl2n2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Oper
ator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:24:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:24:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:24:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:24:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.71,StartTime:2020-03-09 09:24:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:24:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c3e0f57784f390d61c8e31a85978e6f09d929d9a9433608263cd8247e0a12c4b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:23.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9192" for this suite. 
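Editor's note: the rollover case above is slow by design. The Deployment dump shows MinReadySeconds:10 with a RollingUpdate strategy of MaxUnavailable:0, MaxSurge:1, so the new ReplicaSet may only add one pod at a time and each replacement must stay Ready for 10 seconds before it counts as available; the repeated "ReplicaSetUpdated ... is progressing" polls are that wait, not an error. The strategy fragment in isolation:

    spec:
      minReadySeconds: 10
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1         # at most one pod over the desired count
          maxUnavailable: 0   # never dip below the desired count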
• [SLOW TEST:21.169 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":150,"skipped":2581,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:23.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:25.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4649" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2589,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:25.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-2fa3aa0a-d402-4cb9-96bd-559928a48e52 STEP: Creating a pod to test consume configMaps Mar 9 09:24:25.473: INFO: Waiting up to 5m0s for pod "pod-configmaps-1f467cb8-e6cd-4fdd-8074-5b2a2caf161e" in namespace "configmap-9539" to be "success or failure" Mar 9 09:24:25.476: INFO: Pod "pod-configmaps-1f467cb8-e6cd-4fdd-8074-5b2a2caf161e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.66036ms Mar 9 09:24:27.484: INFO: Pod "pod-configmaps-1f467cb8-e6cd-4fdd-8074-5b2a2caf161e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010885916s STEP: Saw pod success Mar 9 09:24:27.484: INFO: Pod "pod-configmaps-1f467cb8-e6cd-4fdd-8074-5b2a2caf161e" satisfied condition "success or failure" Mar 9 09:24:27.493: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1f467cb8-e6cd-4fdd-8074-5b2a2caf161e container configmap-volume-test: STEP: delete the pod Mar 9 09:24:27.526: INFO: Waiting for pod pod-configmaps-1f467cb8-e6cd-4fdd-8074-5b2a2caf161e to disappear Mar 9 09:24:27.530: INFO: Pod pod-configmaps-1f467cb8-e6cd-4fdd-8074-5b2a2caf161e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:27.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9539" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2589,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:27.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 9 09:24:27.595: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 9 09:24:27.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9339' Mar 9 09:24:27.886: INFO: stderr: "" Mar 9 09:24:27.886: INFO: stdout: "service/agnhost-slave created\n" Mar 9 09:24:27.886: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 9 09:24:27.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9339' Mar 9 09:24:28.169: INFO: stderr: "" Mar 9 09:24:28.169: INFO: stdout: "service/agnhost-master created\n" Mar 9 09:24:28.169: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 9 09:24:28.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9339' Mar 9 09:24:28.516: INFO: stderr: "" Mar 9 09:24:28.516: INFO: stdout: "service/frontend created\n" Mar 9 09:24:28.516: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 9 09:24:28.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9339' Mar 9 09:24:28.786: INFO: stderr: "" Mar 9 09:24:28.786: INFO: stdout: "deployment.apps/frontend created\n" Mar 9 09:24:28.786: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 9 09:24:28.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9339' Mar 9 09:24:29.045: INFO: stderr: "" Mar 9 09:24:29.045: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 9 09:24:29.046: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 9 09:24:29.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9339' Mar 9 09:24:29.554: INFO: stderr: "" Mar 9 09:24:29.554: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 9 09:24:29.554: INFO: Waiting for all frontend pods to be Running. Mar 9 09:24:34.604: INFO: Waiting for frontend to serve content. Mar 9 09:24:34.612: INFO: Trying to add a new entry to the guestbook. Mar 9 09:24:34.620: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 9 09:24:34.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9339' Mar 9 09:24:34.818: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 9 09:24:34.818: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 9 09:24:34.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9339' Mar 9 09:24:34.983: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 9 09:24:34.983: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 9 09:24:34.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9339' Mar 9 09:24:35.101: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 9 09:24:35.101: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 9 09:24:35.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9339' Mar 9 09:24:35.173: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 9 09:24:35.173: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 9 09:24:35.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9339' Mar 9 09:24:35.269: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 9 09:24:35.269: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 9 09:24:35.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9339' Mar 9 09:24:35.347: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 9 09:24:35.347: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:35.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9339" for this suite. 
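Editor's note: cleanup in the Guestbook case above relies on force deletion, and the repeated kubectl warning is expected: --grace-period=0 together with --force removes the API object immediately without waiting for the kubelet to confirm that the containers stopped, so the workload may briefly outlive its object. The pattern, as run once per manifest here:

    kubectl delete --grace-period=0 --force -f - --namespace=kubectl-9339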
• [SLOW TEST:7.863 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":153,"skipped":2609,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:35.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-fa5c82fe-a63e-43a8-b1de-d5293821312f STEP: Creating a pod to test consume configMaps Mar 9 09:24:35.592: INFO: Waiting up to 5m0s for pod "pod-configmaps-de1c0134-7a18-4fb5-905b-fb6c89b7b73f" in namespace "configmap-1854" to be "success or failure" Mar 9 09:24:35.596: INFO: Pod "pod-configmaps-de1c0134-7a18-4fb5-905b-fb6c89b7b73f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.989788ms Mar 9 09:24:37.600: INFO: Pod "pod-configmaps-de1c0134-7a18-4fb5-905b-fb6c89b7b73f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008126214s Mar 9 09:24:39.604: INFO: Pod "pod-configmaps-de1c0134-7a18-4fb5-905b-fb6c89b7b73f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012052911s STEP: Saw pod success Mar 9 09:24:39.604: INFO: Pod "pod-configmaps-de1c0134-7a18-4fb5-905b-fb6c89b7b73f" satisfied condition "success or failure" Mar 9 09:24:39.606: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-de1c0134-7a18-4fb5-905b-fb6c89b7b73f container configmap-volume-test: STEP: delete the pod Mar 9 09:24:39.652: INFO: Waiting for pod pod-configmaps-de1c0134-7a18-4fb5-905b-fb6c89b7b73f to disappear Mar 9 09:24:39.703: INFO: Pod pod-configmaps-de1c0134-7a18-4fb5-905b-fb6c89b7b73f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:39.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1854" for this suite. 
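Editor's note: the "mappings as non-root" ConfigMap variant above adds two twists over the plain volume case: an items list remaps a ConfigMap key to a chosen relative path, and the pod runs under a non-root UID. A minimal sketch; the pod name, key, path, and UID are hypothetical, the ConfigMap name is the one from the log:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-example   # hypothetical
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000              # hypothetical non-root UID
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["cat", "/etc/configmap-volume/path/to/data"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test-volume-map-fa5c82fe-a63e-43a8-b1de-d5293821312f
          items:
          - key: data-1              # hypothetical key
            path: path/to/data       # remapped target path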
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2623,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:39.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-4b01f308-b9db-4c30-86dd-c3a4cd9dd4f0 STEP: Creating a pod to test consume secrets Mar 9 09:24:39.787: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c2fa3b25-cd67-4028-8e8a-deeaa27f3fbc" in namespace "projected-7376" to be "success or failure" Mar 9 09:24:39.791: INFO: Pod "pod-projected-secrets-c2fa3b25-cd67-4028-8e8a-deeaa27f3fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.926923ms Mar 9 09:24:41.795: INFO: Pod "pod-projected-secrets-c2fa3b25-cd67-4028-8e8a-deeaa27f3fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00842976s Mar 9 09:24:43.799: INFO: Pod "pod-projected-secrets-c2fa3b25-cd67-4028-8e8a-deeaa27f3fbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011999108s STEP: Saw pod success Mar 9 09:24:43.799: INFO: Pod "pod-projected-secrets-c2fa3b25-cd67-4028-8e8a-deeaa27f3fbc" satisfied condition "success or failure" Mar 9 09:24:43.801: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-c2fa3b25-cd67-4028-8e8a-deeaa27f3fbc container projected-secret-volume-test: STEP: delete the pod Mar 9 09:24:43.839: INFO: Waiting for pod pod-projected-secrets-c2fa3b25-cd67-4028-8e8a-deeaa27f3fbc to disappear Mar 9 09:24:43.842: INFO: Pod pod-projected-secrets-c2fa3b25-cd67-4028-8e8a-deeaa27f3fbc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:43.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7376" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2631,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:43.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:45.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5083" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2647,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:45.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:24:46.388: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 9 09:24:48.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342686, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342686, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342686, loc:(*time.Location)(0x7d83a80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342686, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:24:51.433: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:51.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4823" for this suite. STEP: Destroying namespace "webhook-4823-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.746 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":157,"skipped":2655,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:51.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:24:51.843: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5b3d58cc-f29d-4e61-b1eb-2068f37f30d4", Controller:(*bool)(0xc001f0d656), BlockOwnerDeletion:(*bool)(0xc001f0d657)}} Mar 9 09:24:51.848: INFO: 
pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"097c247a-70c0-4deb-9a0f-b4c3cd1f744c", Controller:(*bool)(0xc0005ef09a), BlockOwnerDeletion:(*bool)(0xc0005ef09b)}} Mar 9 09:24:51.912: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bb1b6cdb-4a78-49e1-a3ab-564018581b57", Controller:(*bool)(0xc002eb1592), BlockOwnerDeletion:(*bool)(0xc002eb1593)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:24:56.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-790" for this suite. • [SLOW TEST:5.239 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":158,"skipped":2670,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:24:56.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:21.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5653" for this suite. 
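The three containers in this blackbox test exercise the three pod restart policies (the suffixes rpa, rpof, and rpn plausibly stand for Always, OnFailure, and Never). A rough sketch of a pod that provokes this behaviour, under the same typed-API assumption as above:

package example

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatingPod runs a single container that exits with the given code so
// the kubelet's RestartCount, Phase, Ready and State handling can be observed
// under each restart policy, which is what the blackbox test above asserts.
func terminatingPod(name string, policy corev1.RestartPolicy, exitCode int) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: policy, // Always, OnFailure, or Never
			Containers: []corev1.Container{{
				Name:    name,
				Image:   "busybox",
				Command: []string{"sh", "-c", fmt.Sprintf("exit %d", exitCode)},
			}},
		},
	}
}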
• [SLOW TEST:24.429 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2674,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:21.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-b07454c0-906f-4fbd-8948-01285281a26b STEP: Creating a pod to test consume configMaps Mar 9 09:25:21.424: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bde67b19-df0b-4297-8bb5-de95f938791b" in namespace "projected-6140" to be "success or failure" Mar 9 09:25:21.456: INFO: Pod "pod-projected-configmaps-bde67b19-df0b-4297-8bb5-de95f938791b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.585047ms Mar 9 09:25:23.461: INFO: Pod "pod-projected-configmaps-bde67b19-df0b-4297-8bb5-de95f938791b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.037388672s STEP: Saw pod success Mar 9 09:25:23.461: INFO: Pod "pod-projected-configmaps-bde67b19-df0b-4297-8bb5-de95f938791b" satisfied condition "success or failure" Mar 9 09:25:23.464: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-bde67b19-df0b-4297-8bb5-de95f938791b container projected-configmap-volume-test: STEP: delete the pod Mar 9 09:25:23.513: INFO: Waiting for pod pod-projected-configmaps-bde67b19-df0b-4297-8bb5-de95f938791b to disappear Mar 9 09:25:23.523: INFO: Pod pod-projected-configmaps-bde67b19-df0b-4297-8bb5-de95f938791b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:23.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6140" for this suite. 
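The projected ConfigMap case differs from the projected Secret case earlier only in the projection source; a sketch of that source, same assumptions as above:

package example

import corev1 "k8s.io/api/core/v1"

// configMapProjection is the ConfigMap counterpart of the SecretProjection
// used earlier: it drops into the same Sources list of a projected volume.
func configMapProjection(name string) corev1.VolumeProjection {
	return corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: name},
		},
	}
}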
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2674,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:23.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:25:24.499: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 9 09:25:26.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342724, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342724, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342724, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342724, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:25:29.602: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 9 09:25:33.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1452 to-be-attached-pod -i -c=container1' Mar 9 09:25:33.785: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:33.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1452" for this suite. STEP: Destroying namespace "webhook-1452-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.359 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":161,"skipped":2684,"failed":0} SS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:33.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 9 09:25:33.944: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5207" to be "success or failure" Mar 9 09:25:33.955: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.789121ms Mar 9 09:25:35.966: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021974029s STEP: Saw pod success Mar 9 09:25:35.966: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 9 09:25:35.969: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 9 09:25:35.999: INFO: Waiting for pod pod-host-path-test to disappear Mar 9 09:25:36.015: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:36.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5207" for this suite. 
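A sketch of the hostPath volume such a mode test mounts, assuming the typed API; the path and the DirectoryOrCreate type here are illustrative:

package example

import corev1 "k8s.io/api/core/v1"

// hostPathVolume exposes a directory from the node's filesystem to the pod;
// the mode test above mounts such a volume and stats the mount point from
// each test container.
func hostPathVolume(path string) corev1.Volume {
	dirOrCreate := corev1.HostPathDirectoryOrCreate
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: path,         // e.g. /tmp/e2e-hostpath
				Type: &dirOrCreate, // create the directory if it does not exist
			},
		},
	}
}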
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2686,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:36.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 9 09:25:36.110: INFO: Waiting up to 5m0s for pod "downward-api-2a62e3ec-a77c-4b06-8b5c-a6a876e16ad0" in namespace "downward-api-8407" to be "success or failure" Mar 9 09:25:36.140: INFO: Pod "downward-api-2a62e3ec-a77c-4b06-8b5c-a6a876e16ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 29.995046ms Mar 9 09:25:38.143: INFO: Pod "downward-api-2a62e3ec-a77c-4b06-8b5c-a6a876e16ad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.032838665s STEP: Saw pod success Mar 9 09:25:38.143: INFO: Pod "downward-api-2a62e3ec-a77c-4b06-8b5c-a6a876e16ad0" satisfied condition "success or failure" Mar 9 09:25:38.145: INFO: Trying to get logs from node jerma-worker pod downward-api-2a62e3ec-a77c-4b06-8b5c-a6a876e16ad0 container dapi-container: STEP: delete the pod Mar 9 09:25:38.181: INFO: Waiting for pod downward-api-2a62e3ec-a77c-4b06-8b5c-a6a876e16ad0 to disappear Mar 9 09:25:38.192: INFO: Pod downward-api-2a62e3ec-a77c-4b06-8b5c-a6a876e16ad0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:38.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8407" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2700,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:38.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:25:38.260: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 9 09:25:41.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7215 create -f -' Mar 9 09:25:43.112: INFO: stderr: "" Mar 9 09:25:43.112: INFO: stdout: "e2e-test-crd-publish-openapi-1304-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 9 09:25:43.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7215 delete e2e-test-crd-publish-openapi-1304-crds test-cr' Mar 9 09:25:43.444: INFO: stderr: "" Mar 9 09:25:43.444: INFO: stdout: "e2e-test-crd-publish-openapi-1304-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 9 09:25:43.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7215 apply -f -' Mar 9 09:25:43.756: INFO: stderr: "" Mar 9 09:25:43.756: INFO: stdout: "e2e-test-crd-publish-openapi-1304-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 9 09:25:43.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7215 delete e2e-test-crd-publish-openapi-1304-crds test-cr' Mar 9 09:25:43.847: INFO: stderr: "" Mar 9 09:25:43.847: INFO: stdout: "e2e-test-crd-publish-openapi-1304-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 9 09:25:43.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1304-crds' Mar 9 09:25:44.055: INFO: stderr: "" Mar 9 09:25:44.055: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1304-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:46.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7215" for this suite. 
• [SLOW TEST:8.687 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":164,"skipped":2702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:46.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 9 09:25:46.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 9 09:25:47.059: INFO: stderr: "" Mar 9 09:25:47.059: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:47.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-846" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":165,"skipped":2734,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:47.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-3df4bf3f-92ba-4625-b3f1-30397847d67f STEP: Creating a pod to test consume configMaps Mar 9 09:25:47.141: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-995b84a4-e695-4377-ab62-79c7df8e13ad" in namespace "projected-9339" to be "success or failure" Mar 9 09:25:47.169: INFO: Pod "pod-projected-configmaps-995b84a4-e695-4377-ab62-79c7df8e13ad": Phase="Pending", Reason="", readiness=false. Elapsed: 27.540038ms Mar 9 09:25:49.172: INFO: Pod "pod-projected-configmaps-995b84a4-e695-4377-ab62-79c7df8e13ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.030568771s STEP: Saw pod success Mar 9 09:25:49.172: INFO: Pod "pod-projected-configmaps-995b84a4-e695-4377-ab62-79c7df8e13ad" satisfied condition "success or failure" Mar 9 09:25:49.174: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-995b84a4-e695-4377-ab62-79c7df8e13ad container projected-configmap-volume-test: STEP: delete the pod Mar 9 09:25:49.215: INFO: Waiting for pod pod-projected-configmaps-995b84a4-e695-4377-ab62-79c7df8e13ad to disappear Mar 9 09:25:49.225: INFO: Pod pod-projected-configmaps-995b84a4-e695-4377-ab62-79c7df8e13ad no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:49.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9339" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2744,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:49.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-594ec3b9-4298-4e4b-b9cc-47d19c31609b STEP: Creating a pod to test consume configMaps Mar 9 09:25:49.316: INFO: Waiting up to 5m0s for pod "pod-configmaps-4c7efbc0-ac4e-43cf-952f-bd9ab500610c" in namespace "configmap-9323" to be "success or failure" Mar 9 09:25:49.333: INFO: Pod "pod-configmaps-4c7efbc0-ac4e-43cf-952f-bd9ab500610c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.626101ms Mar 9 09:25:51.336: INFO: Pod "pod-configmaps-4c7efbc0-ac4e-43cf-952f-bd9ab500610c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020030228s STEP: Saw pod success Mar 9 09:25:51.336: INFO: Pod "pod-configmaps-4c7efbc0-ac4e-43cf-952f-bd9ab500610c" satisfied condition "success or failure" Mar 9 09:25:51.338: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4c7efbc0-ac4e-43cf-952f-bd9ab500610c container configmap-volume-test: STEP: delete the pod Mar 9 09:25:51.358: INFO: Waiting for pod pod-configmaps-4c7efbc0-ac4e-43cf-952f-bd9ab500610c to disappear Mar 9 09:25:51.362: INFO: Pod pod-configmaps-4c7efbc0-ac4e-43cf-952f-bd9ab500610c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:51.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9323" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2754,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:51.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 9 09:25:53.468: INFO: Pod pod-hostip-c32b3efe-9cab-4f5c-a77e-976733d686d9 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:53.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4926" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2815,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:53.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:25:53.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5253" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":169,"skipped":2821,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:25:53.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0309 09:26:03.733232 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 9 09:26:03.733: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:26:03.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4909" for this suite. 
• [SLOW TEST:10.087 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":170,"skipped":2829,"failed":0} [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:26:03.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:26:05.870: INFO: Waiting up to 5m0s for pod "client-envvars-7066cda6-8644-4e33-87da-25c09058e36e" in namespace "pods-1924" to be "success or failure" Mar 9 09:26:05.890: INFO: Pod "client-envvars-7066cda6-8644-4e33-87da-25c09058e36e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.827719ms Mar 9 09:26:07.894: INFO: Pod "client-envvars-7066cda6-8644-4e33-87da-25c09058e36e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023608998s STEP: Saw pod success Mar 9 09:26:07.894: INFO: Pod "client-envvars-7066cda6-8644-4e33-87da-25c09058e36e" satisfied condition "success or failure" Mar 9 09:26:07.897: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-7066cda6-8644-4e33-87da-25c09058e36e container env3cont: STEP: delete the pod Mar 9 09:26:07.962: INFO: Waiting for pod client-envvars-7066cda6-8644-4e33-87da-25c09058e36e to disappear Mar 9 09:26:07.973: INFO: Pod client-envvars-7066cda6-8644-4e33-87da-25c09058e36e no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:26:07.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1924" for this suite. 
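The injected variables follow a fixed naming scheme: the service name upper-cased with dashes replaced by underscores, suffixed _SERVICE_HOST and _SERVICE_PORT, and they only appear in pods created after the service exists (which is why the test above creates the pod second). A tiny helper that reproduces the scheme:

package example

import "strings"

// serviceEnvNames reproduces the kubelet's naming scheme for the service
// discovery variables injected into pods started after the service:
// my-svc -> MY_SVC_SERVICE_HOST / MY_SVC_SERVICE_PORT.
func serviceEnvNames(serviceName string) (hostVar, portVar string) {
	prefix := strings.ReplaceAll(strings.ToUpper(serviceName), "-", "_")
	return prefix + "_SERVICE_HOST", prefix + "_SERVICE_PORT"
}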
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2829,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:26:07.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 9 09:26:12.584: INFO: Successfully updated pod "labelsupdate87a50234-1013-44ae-ad55-8c5ee464663a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:26:14.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1598" for this suite. • [SLOW TEST:6.631 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2845,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:26:14.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:26:22.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4042" for this suite. 
• [SLOW TEST:8.059 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":173,"skipped":2852,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:26:22.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 9 09:26:30.812: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9614 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:26:30.812: INFO: >>> kubeConfig: /root/.kube/config I0309 09:26:30.848652 6 log.go:172] (0xc0064982c0) (0xc0028d1220) Create stream I0309 09:26:30.848685 6 log.go:172] (0xc0064982c0) (0xc0028d1220) Stream added, broadcasting: 1 I0309 09:26:30.850689 6 log.go:172] (0xc0064982c0) Reply frame received for 1 I0309 09:26:30.850743 6 log.go:172] (0xc0064982c0) (0xc000fe20a0) Create stream I0309 09:26:30.850766 6 log.go:172] (0xc0064982c0) (0xc000fe20a0) Stream added, broadcasting: 3 I0309 09:26:30.851785 6 log.go:172] (0xc0064982c0) Reply frame received for 3 I0309 09:26:30.851821 6 log.go:172] (0xc0064982c0) (0xc000d31540) Create stream I0309 09:26:30.851835 6 log.go:172] (0xc0064982c0) (0xc000d31540) Stream added, broadcasting: 5 I0309 09:26:30.852794 6 log.go:172] (0xc0064982c0) Reply frame received for 5 I0309 09:26:30.917571 6 log.go:172] (0xc0064982c0) Data frame received for 3 I0309 09:26:30.917620 6 log.go:172] (0xc000fe20a0) (3) Data frame handling I0309 09:26:30.917640 6 log.go:172] (0xc000fe20a0) (3) Data frame sent I0309 09:26:30.917654 6 log.go:172] (0xc0064982c0) Data frame received for 3 I0309 09:26:30.917667 6 log.go:172] (0xc000fe20a0) (3) Data frame handling I0309 09:26:30.917710 6 log.go:172] (0xc0064982c0) Data frame received for 5 I0309 09:26:30.917749 6 log.go:172] (0xc000d31540) (5) Data frame handling I0309 09:26:30.919069 6 log.go:172] (0xc0064982c0) Data frame received for 1 I0309 09:26:30.919092 6 log.go:172] (0xc0028d1220) (1) Data frame handling I0309 09:26:30.919103 6 log.go:172] (0xc0028d1220) (1) Data frame sent I0309 09:26:30.919117 6 log.go:172] (0xc0064982c0) (0xc0028d1220) Stream removed, broadcasting: 1 I0309 09:26:30.919133 6 log.go:172] (0xc0064982c0) Go away received 
I0309 09:26:30.919235 6 log.go:172] (0xc0064982c0) (0xc0028d1220) Stream removed, broadcasting: 1 I0309 09:26:30.919252 6 log.go:172] (0xc0064982c0) (0xc000fe20a0) Stream removed, broadcasting: 3 I0309 09:26:30.919262 6 log.go:172] (0xc0064982c0) (0xc000d31540) Stream removed, broadcasting: 5 Mar 9 09:26:30.919: INFO: Exec stderr: "" Mar 9 09:26:30.919: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9614 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:26:30.919: INFO: >>> kubeConfig: /root/.kube/config I0309 09:26:30.952352 6 log.go:172] (0xc005b52580) (0xc000d31ae0) Create stream I0309 09:26:30.952400 6 log.go:172] (0xc005b52580) (0xc000d31ae0) Stream added, broadcasting: 1 I0309 09:26:30.953968 6 log.go:172] (0xc005b52580) Reply frame received for 1 I0309 09:26:30.953999 6 log.go:172] (0xc005b52580) (0xc000d31cc0) Create stream I0309 09:26:30.954011 6 log.go:172] (0xc005b52580) (0xc000d31cc0) Stream added, broadcasting: 3 I0309 09:26:30.954916 6 log.go:172] (0xc005b52580) Reply frame received for 3 I0309 09:26:30.954946 6 log.go:172] (0xc005b52580) (0xc0016f2be0) Create stream I0309 09:26:30.954958 6 log.go:172] (0xc005b52580) (0xc0016f2be0) Stream added, broadcasting: 5 I0309 09:26:30.955717 6 log.go:172] (0xc005b52580) Reply frame received for 5 I0309 09:26:31.017512 6 log.go:172] (0xc005b52580) Data frame received for 5 I0309 09:26:31.017550 6 log.go:172] (0xc0016f2be0) (5) Data frame handling I0309 09:26:31.017571 6 log.go:172] (0xc005b52580) Data frame received for 3 I0309 09:26:31.017584 6 log.go:172] (0xc000d31cc0) (3) Data frame handling I0309 09:26:31.017599 6 log.go:172] (0xc000d31cc0) (3) Data frame sent I0309 09:26:31.017614 6 log.go:172] (0xc005b52580) Data frame received for 3 I0309 09:26:31.017621 6 log.go:172] (0xc000d31cc0) (3) Data frame handling I0309 09:26:31.019097 6 log.go:172] (0xc005b52580) Data frame received for 1 I0309 09:26:31.019122 6 log.go:172] (0xc000d31ae0) (1) Data frame handling I0309 09:26:31.019141 6 log.go:172] (0xc000d31ae0) (1) Data frame sent I0309 09:26:31.019157 6 log.go:172] (0xc005b52580) (0xc000d31ae0) Stream removed, broadcasting: 1 I0309 09:26:31.019172 6 log.go:172] (0xc005b52580) Go away received I0309 09:26:31.019312 6 log.go:172] (0xc005b52580) (0xc000d31ae0) Stream removed, broadcasting: 1 I0309 09:26:31.019336 6 log.go:172] (0xc005b52580) (0xc000d31cc0) Stream removed, broadcasting: 3 I0309 09:26:31.019357 6 log.go:172] (0xc005b52580) (0xc0016f2be0) Stream removed, broadcasting: 5 Mar 9 09:26:31.019: INFO: Exec stderr: "" Mar 9 09:26:31.019: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9614 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:26:31.019: INFO: >>> kubeConfig: /root/.kube/config I0309 09:26:31.055436 6 log.go:172] (0xc005b52bb0) (0xc00244a1e0) Create stream I0309 09:26:31.055468 6 log.go:172] (0xc005b52bb0) (0xc00244a1e0) Stream added, broadcasting: 1 I0309 09:26:31.057030 6 log.go:172] (0xc005b52bb0) Reply frame received for 1 I0309 09:26:31.057061 6 log.go:172] (0xc005b52bb0) (0xc0016f2c80) Create stream I0309 09:26:31.057071 6 log.go:172] (0xc005b52bb0) (0xc0016f2c80) Stream added, broadcasting: 3 I0309 09:26:31.057804 6 log.go:172] (0xc005b52bb0) Reply frame received for 3 I0309 09:26:31.057835 6 log.go:172] (0xc005b52bb0) (0xc0016f2d20) Create stream I0309 09:26:31.057843 6 log.go:172] (0xc005b52bb0) 
(0xc0016f2d20) Stream added, broadcasting: 5 I0309 09:26:31.058771 6 log.go:172] (0xc005b52bb0) Reply frame received for 5 I0309 09:26:31.122212 6 log.go:172] (0xc005b52bb0) Data frame received for 3 I0309 09:26:31.122255 6 log.go:172] (0xc0016f2c80) (3) Data frame handling I0309 09:26:31.122309 6 log.go:172] (0xc0016f2c80) (3) Data frame sent I0309 09:26:31.122333 6 log.go:172] (0xc005b52bb0) Data frame received for 3 I0309 09:26:31.122351 6 log.go:172] (0xc0016f2c80) (3) Data frame handling I0309 09:26:31.122570 6 log.go:172] (0xc005b52bb0) Data frame received for 5 I0309 09:26:31.122593 6 log.go:172] (0xc0016f2d20) (5) Data frame handling I0309 09:26:31.124206 6 log.go:172] (0xc005b52bb0) Data frame received for 1 I0309 09:26:31.124241 6 log.go:172] (0xc00244a1e0) (1) Data frame handling I0309 09:26:31.124262 6 log.go:172] (0xc00244a1e0) (1) Data frame sent I0309 09:26:31.124282 6 log.go:172] (0xc005b52bb0) (0xc00244a1e0) Stream removed, broadcasting: 1 I0309 09:26:31.124355 6 log.go:172] (0xc005b52bb0) (0xc00244a1e0) Stream removed, broadcasting: 1 I0309 09:26:31.124375 6 log.go:172] (0xc005b52bb0) (0xc0016f2c80) Stream removed, broadcasting: 3 I0309 09:26:31.124391 6 log.go:172] (0xc005b52bb0) (0xc0016f2d20) Stream removed, broadcasting: 5 Mar 9 09:26:31.124: INFO: Exec stderr: "" Mar 9 09:26:31.124: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9614 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:26:31.124: INFO: >>> kubeConfig: /root/.kube/config I0309 09:26:31.125413 6 log.go:172] (0xc005b52bb0) Go away received I0309 09:26:31.156098 6 log.go:172] (0xc0064988f0) (0xc0028d1400) Create stream I0309 09:26:31.156130 6 log.go:172] (0xc0064988f0) (0xc0028d1400) Stream added, broadcasting: 1 I0309 09:26:31.157911 6 log.go:172] (0xc0064988f0) Reply frame received for 1 I0309 09:26:31.157953 6 log.go:172] (0xc0064988f0) (0xc000fe2640) Create stream I0309 09:26:31.157967 6 log.go:172] (0xc0064988f0) (0xc000fe2640) Stream added, broadcasting: 3 I0309 09:26:31.159158 6 log.go:172] (0xc0064988f0) Reply frame received for 3 I0309 09:26:31.159200 6 log.go:172] (0xc0064988f0) (0xc00244a320) Create stream I0309 09:26:31.159217 6 log.go:172] (0xc0064988f0) (0xc00244a320) Stream added, broadcasting: 5 I0309 09:26:31.160184 6 log.go:172] (0xc0064988f0) Reply frame received for 5 I0309 09:26:31.229481 6 log.go:172] (0xc0064988f0) Data frame received for 5 I0309 09:26:31.229523 6 log.go:172] (0xc00244a320) (5) Data frame handling I0309 09:26:31.229556 6 log.go:172] (0xc0064988f0) Data frame received for 3 I0309 09:26:31.229572 6 log.go:172] (0xc000fe2640) (3) Data frame handling I0309 09:26:31.229596 6 log.go:172] (0xc000fe2640) (3) Data frame sent I0309 09:26:31.229609 6 log.go:172] (0xc0064988f0) Data frame received for 3 I0309 09:26:31.229620 6 log.go:172] (0xc000fe2640) (3) Data frame handling I0309 09:26:31.230786 6 log.go:172] (0xc0064988f0) Data frame received for 1 I0309 09:26:31.230805 6 log.go:172] (0xc0028d1400) (1) Data frame handling I0309 09:26:31.230826 6 log.go:172] (0xc0028d1400) (1) Data frame sent I0309 09:26:31.231055 6 log.go:172] (0xc0064988f0) (0xc0028d1400) Stream removed, broadcasting: 1 I0309 09:26:31.231088 6 log.go:172] (0xc0064988f0) Go away received I0309 09:26:31.231151 6 log.go:172] (0xc0064988f0) (0xc0028d1400) Stream removed, broadcasting: 1 I0309 09:26:31.231182 6 log.go:172] (0xc0064988f0) (0xc000fe2640) Stream removed, broadcasting: 3 I0309 
09:26:31.231191 6 log.go:172] (0xc0064988f0) (0xc00244a320) Stream removed, broadcasting: 5 Mar 9 09:26:31.231: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 9 09:26:31.231: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9614 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:26:31.231: INFO: >>> kubeConfig: /root/.kube/config I0309 09:26:31.258887 6 log.go:172] (0xc005c72a50) (0xc0016f3180) Create stream I0309 09:26:31.258920 6 log.go:172] (0xc005c72a50) (0xc0016f3180) Stream added, broadcasting: 1 I0309 09:26:31.260505 6 log.go:172] (0xc005c72a50) Reply frame received for 1 I0309 09:26:31.260537 6 log.go:172] (0xc005c72a50) (0xc0016f3360) Create stream I0309 09:26:31.260549 6 log.go:172] (0xc005c72a50) (0xc0016f3360) Stream added, broadcasting: 3 I0309 09:26:31.261338 6 log.go:172] (0xc005c72a50) Reply frame received for 3 I0309 09:26:31.261369 6 log.go:172] (0xc005c72a50) (0xc0016f3400) Create stream I0309 09:26:31.261385 6 log.go:172] (0xc005c72a50) (0xc0016f3400) Stream added, broadcasting: 5 I0309 09:26:31.262182 6 log.go:172] (0xc005c72a50) Reply frame received for 5 I0309 09:26:31.325141 6 log.go:172] (0xc005c72a50) Data frame received for 3 I0309 09:26:31.325178 6 log.go:172] (0xc0016f3360) (3) Data frame handling I0309 09:26:31.325189 6 log.go:172] (0xc0016f3360) (3) Data frame sent I0309 09:26:31.325201 6 log.go:172] (0xc005c72a50) Data frame received for 3 I0309 09:26:31.325212 6 log.go:172] (0xc0016f3360) (3) Data frame handling I0309 09:26:31.325232 6 log.go:172] (0xc005c72a50) Data frame received for 5 I0309 09:26:31.325242 6 log.go:172] (0xc0016f3400) (5) Data frame handling I0309 09:26:31.326239 6 log.go:172] (0xc005c72a50) Data frame received for 1 I0309 09:26:31.326270 6 log.go:172] (0xc0016f3180) (1) Data frame handling I0309 09:26:31.326286 6 log.go:172] (0xc0016f3180) (1) Data frame sent I0309 09:26:31.326419 6 log.go:172] (0xc005c72a50) (0xc0016f3180) Stream removed, broadcasting: 1 I0309 09:26:31.326447 6 log.go:172] (0xc005c72a50) Go away received I0309 09:26:31.326515 6 log.go:172] (0xc005c72a50) (0xc0016f3180) Stream removed, broadcasting: 1 I0309 09:26:31.326536 6 log.go:172] (0xc005c72a50) (0xc0016f3360) Stream removed, broadcasting: 3 I0309 09:26:31.326551 6 log.go:172] (0xc005c72a50) (0xc0016f3400) Stream removed, broadcasting: 5 Mar 9 09:26:31.326: INFO: Exec stderr: "" Mar 9 09:26:31.326: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9614 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:26:31.326: INFO: >>> kubeConfig: /root/.kube/config I0309 09:26:31.369499 6 log.go:172] (0xc006498f20) (0xc0028d1680) Create stream I0309 09:26:31.369533 6 log.go:172] (0xc006498f20) (0xc0028d1680) Stream added, broadcasting: 1 I0309 09:26:31.371700 6 log.go:172] (0xc006498f20) Reply frame received for 1 I0309 09:26:31.371728 6 log.go:172] (0xc006498f20) (0xc000a07720) Create stream I0309 09:26:31.371734 6 log.go:172] (0xc006498f20) (0xc000a07720) Stream added, broadcasting: 3 I0309 09:26:31.372384 6 log.go:172] (0xc006498f20) Reply frame received for 3 I0309 09:26:31.372405 6 log.go:172] (0xc006498f20) (0xc000a07ae0) Create stream I0309 09:26:31.372414 6 log.go:172] (0xc006498f20) (0xc000a07ae0) Stream added, broadcasting: 5 I0309 09:26:31.373273 6 log.go:172] (0xc006498f20) 
Reply frame received for 5 I0309 09:26:31.424922 6 log.go:172] (0xc006498f20) Data frame received for 3 I0309 09:26:31.424954 6 log.go:172] (0xc000a07720) (3) Data frame handling I0309 09:26:31.424996 6 log.go:172] (0xc000a07720) (3) Data frame sent I0309 09:26:31.425011 6 log.go:172] (0xc006498f20) Data frame received for 3 I0309 09:26:31.425022 6 log.go:172] (0xc000a07720) (3) Data frame handling I0309 09:26:31.425156 6 log.go:172] (0xc006498f20) Data frame received for 5 I0309 09:26:31.425179 6 log.go:172] (0xc000a07ae0) (5) Data frame handling I0309 09:26:31.426389 6 log.go:172] (0xc006498f20) Data frame received for 1 I0309 09:26:31.426406 6 log.go:172] (0xc0028d1680) (1) Data frame handling I0309 09:26:31.426416 6 log.go:172] (0xc0028d1680) (1) Data frame sent I0309 09:26:31.426427 6 log.go:172] (0xc006498f20) (0xc0028d1680) Stream removed, broadcasting: 1 I0309 09:26:31.426499 6 log.go:172] (0xc006498f20) (0xc0028d1680) Stream removed, broadcasting: 1 I0309 09:26:31.426519 6 log.go:172] (0xc006498f20) (0xc000a07720) Stream removed, broadcasting: 3 I0309 09:26:31.426650 6 log.go:172] (0xc006498f20) (0xc000a07ae0) Stream removed, broadcasting: 5 Mar 9 09:26:31.426: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true I0309 09:26:31.426937 6 log.go:172] (0xc006498f20) Go away received Mar 9 09:26:31.426: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9614 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:26:31.426: INFO: >>> kubeConfig: /root/.kube/config I0309 09:26:31.453613 6 log.go:172] (0xc005c73080) (0xc0016f35e0) Create stream I0309 09:26:31.453639 6 log.go:172] (0xc005c73080) (0xc0016f35e0) Stream added, broadcasting: 1 I0309 09:26:31.455164 6 log.go:172] (0xc005c73080) Reply frame received for 1 I0309 09:26:31.455191 6 log.go:172] (0xc005c73080) (0xc0016f3680) Create stream I0309 09:26:31.455200 6 log.go:172] (0xc005c73080) (0xc0016f3680) Stream added, broadcasting: 3 I0309 09:26:31.455905 6 log.go:172] (0xc005c73080) Reply frame received for 3 I0309 09:26:31.455930 6 log.go:172] (0xc005c73080) (0xc00244a3c0) Create stream I0309 09:26:31.455939 6 log.go:172] (0xc005c73080) (0xc00244a3c0) Stream added, broadcasting: 5 I0309 09:26:31.456651 6 log.go:172] (0xc005c73080) Reply frame received for 5 I0309 09:26:31.537906 6 log.go:172] (0xc005c73080) Data frame received for 3 I0309 09:26:31.537938 6 log.go:172] (0xc0016f3680) (3) Data frame handling I0309 09:26:31.537949 6 log.go:172] (0xc0016f3680) (3) Data frame sent I0309 09:26:31.537958 6 log.go:172] (0xc005c73080) Data frame received for 3 I0309 09:26:31.537965 6 log.go:172] (0xc0016f3680) (3) Data frame handling I0309 09:26:31.538016 6 log.go:172] (0xc005c73080) Data frame received for 5 I0309 09:26:31.538034 6 log.go:172] (0xc00244a3c0) (5) Data frame handling I0309 09:26:31.539237 6 log.go:172] (0xc005c73080) Data frame received for 1 I0309 09:26:31.539255 6 log.go:172] (0xc0016f35e0) (1) Data frame handling I0309 09:26:31.539264 6 log.go:172] (0xc0016f35e0) (1) Data frame sent I0309 09:26:31.539277 6 log.go:172] (0xc005c73080) (0xc0016f35e0) Stream removed, broadcasting: 1 I0309 09:26:31.539329 6 log.go:172] (0xc005c73080) Go away received I0309 09:26:31.539369 6 log.go:172] (0xc005c73080) (0xc0016f35e0) Stream removed, broadcasting: 1 I0309 09:26:31.539381 6 log.go:172] (0xc005c73080) (0xc0016f3680) Stream removed, broadcasting: 3 I0309 
09:26:31.539394 6 log.go:172] (0xc005c73080) (0xc00244a3c0) Stream removed, broadcasting: 5 Mar 9 09:26:31.539: INFO: Exec stderr: "" Mar 9 09:26:31.539: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9614 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:26:31.539: INFO: >>> kubeConfig: /root/.kube/config I0309 09:26:31.567326 6 log.go:172] (0xc006c2c2c0) (0xc000fe2e60) Create stream I0309 09:26:31.567355 6 log.go:172] (0xc006c2c2c0) (0xc000fe2e60) Stream added, broadcasting: 1 I0309 09:26:31.574736 6 log.go:172] (0xc006c2c2c0) Reply frame received for 1 I0309 09:26:31.574777 6 log.go:172] (0xc006c2c2c0) (0xc0016f2140) Create stream I0309 09:26:31.574789 6 log.go:172] (0xc006c2c2c0) (0xc0016f2140) Stream added, broadcasting: 3 I0309 09:26:31.575657 6 log.go:172] (0xc006c2c2c0) Reply frame received for 3 I0309 09:26:31.575686 6 log.go:172] (0xc006c2c2c0) (0xc000a06320) Create stream I0309 09:26:31.575700 6 log.go:172] (0xc006c2c2c0) (0xc000a06320) Stream added, broadcasting: 5 I0309 09:26:31.576620 6 log.go:172] (0xc006c2c2c0) Reply frame received for 5 I0309 09:26:31.650168 6 log.go:172] (0xc006c2c2c0) Data frame received for 3 I0309 09:26:31.650193 6 log.go:172] (0xc0016f2140) (3) Data frame handling I0309 09:26:31.650208 6 log.go:172] (0xc0016f2140) (3) Data frame sent I0309 09:26:31.650551 6 log.go:172] (0xc006c2c2c0) Data frame received for 5 I0309 09:26:31.650574 6 log.go:172] (0xc000a06320) (5) Data frame handling I0309 09:26:31.650595 6 log.go:172] (0xc006c2c2c0) Data frame received for 3 I0309 09:26:31.650606 6 log.go:172] (0xc0016f2140) (3) Data frame handling I0309 09:26:31.653645 6 log.go:172] (0xc006c2c2c0) Data frame received for 1 I0309 09:26:31.653671 6 log.go:172] (0xc000fe2e60) (1) Data frame handling I0309 09:26:31.653684 6 log.go:172] (0xc000fe2e60) (1) Data frame sent I0309 09:26:31.653764 6 log.go:172] (0xc006c2c2c0) (0xc000fe2e60) Stream removed, broadcasting: 1 I0309 09:26:31.653838 6 log.go:172] (0xc006c2c2c0) (0xc000fe2e60) Stream removed, broadcasting: 1 I0309 09:26:31.653863 6 log.go:172] (0xc006c2c2c0) (0xc0016f2140) Stream removed, broadcasting: 3 I0309 09:26:31.653877 6 log.go:172] (0xc006c2c2c0) (0xc000a06320) Stream removed, broadcasting: 5 Mar 9 09:26:31.653: INFO: Exec stderr: "" Mar 9 09:26:31.653: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9614 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0309 09:26:31.653943 6 log.go:172] (0xc006c2c2c0) Go away received Mar 9 09:26:31.653: INFO: >>> kubeConfig: /root/.kube/config I0309 09:26:31.680920 6 log.go:172] (0xc001d96160) (0xc000a063c0) Create stream I0309 09:26:31.680953 6 log.go:172] (0xc001d96160) (0xc000a063c0) Stream added, broadcasting: 1 I0309 09:26:31.682843 6 log.go:172] (0xc001d96160) Reply frame received for 1 I0309 09:26:31.682870 6 log.go:172] (0xc001d96160) (0xc000a065a0) Create stream I0309 09:26:31.682880 6 log.go:172] (0xc001d96160) (0xc000a065a0) Stream added, broadcasting: 3 I0309 09:26:31.683660 6 log.go:172] (0xc001d96160) Reply frame received for 3 I0309 09:26:31.683690 6 log.go:172] (0xc001d96160) (0xc000e12640) Create stream I0309 09:26:31.683701 6 log.go:172] (0xc001d96160) (0xc000e12640) Stream added, broadcasting: 5 I0309 09:26:31.684684 6 log.go:172] (0xc001d96160) Reply frame received for 5 I0309 09:26:31.749840 6 log.go:172] (0xc001d96160) 
Data frame received for 3 I0309 09:26:31.749867 6 log.go:172] (0xc000a065a0) (3) Data frame handling I0309 09:26:31.749887 6 log.go:172] (0xc000a065a0) (3) Data frame sent I0309 09:26:31.749924 6 log.go:172] (0xc001d96160) Data frame received for 3 I0309 09:26:31.749936 6 log.go:172] (0xc000a065a0) (3) Data frame handling I0309 09:26:31.749971 6 log.go:172] (0xc001d96160) Data frame received for 5 I0309 09:26:31.750004 6 log.go:172] (0xc000e12640) (5) Data frame handling I0309 09:26:31.751643 6 log.go:172] (0xc001d96160) Data frame received for 1 I0309 09:26:31.751674 6 log.go:172] (0xc000a063c0) (1) Data frame handling I0309 09:26:31.751708 6 log.go:172] (0xc000a063c0) (1) Data frame sent I0309 09:26:31.751733 6 log.go:172] (0xc001d96160) (0xc000a063c0) Stream removed, broadcasting: 1 I0309 09:26:31.751750 6 log.go:172] (0xc001d96160) Go away received I0309 09:26:31.751875 6 log.go:172] (0xc001d96160) (0xc000a063c0) Stream removed, broadcasting: 1 I0309 09:26:31.751894 6 log.go:172] (0xc001d96160) (0xc000a065a0) Stream removed, broadcasting: 3 I0309 09:26:31.751910 6 log.go:172] (0xc001d96160) (0xc000e12640) Stream removed, broadcasting: 5 Mar 9 09:26:31.751: INFO: Exec stderr: "" Mar 9 09:26:31.751: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9614 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:26:31.751: INFO: >>> kubeConfig: /root/.kube/config I0309 09:26:31.780167 6 log.go:172] (0xc00299a000) (0xc00237c000) Create stream I0309 09:26:31.780194 6 log.go:172] (0xc00299a000) (0xc00237c000) Stream added, broadcasting: 1 I0309 09:26:31.782498 6 log.go:172] (0xc00299a000) Reply frame received for 1 I0309 09:26:31.782537 6 log.go:172] (0xc00299a000) (0xc0022900a0) Create stream I0309 09:26:31.782549 6 log.go:172] (0xc00299a000) (0xc0022900a0) Stream added, broadcasting: 3 I0309 09:26:31.783268 6 log.go:172] (0xc00299a000) Reply frame received for 3 I0309 09:26:31.783291 6 log.go:172] (0xc00299a000) (0xc000a06b40) Create stream I0309 09:26:31.783299 6 log.go:172] (0xc00299a000) (0xc000a06b40) Stream added, broadcasting: 5 I0309 09:26:31.784107 6 log.go:172] (0xc00299a000) Reply frame received for 5 I0309 09:26:31.853669 6 log.go:172] (0xc00299a000) Data frame received for 5 I0309 09:26:31.853697 6 log.go:172] (0xc000a06b40) (5) Data frame handling I0309 09:26:31.853713 6 log.go:172] (0xc00299a000) Data frame received for 3 I0309 09:26:31.853720 6 log.go:172] (0xc0022900a0) (3) Data frame handling I0309 09:26:31.853731 6 log.go:172] (0xc0022900a0) (3) Data frame sent I0309 09:26:31.853754 6 log.go:172] (0xc00299a000) Data frame received for 3 I0309 09:26:31.853759 6 log.go:172] (0xc0022900a0) (3) Data frame handling I0309 09:26:31.854909 6 log.go:172] (0xc00299a000) Data frame received for 1 I0309 09:26:31.854925 6 log.go:172] (0xc00237c000) (1) Data frame handling I0309 09:26:31.854939 6 log.go:172] (0xc00237c000) (1) Data frame sent I0309 09:26:31.854949 6 log.go:172] (0xc00299a000) (0xc00237c000) Stream removed, broadcasting: 1 I0309 09:26:31.854988 6 log.go:172] (0xc00299a000) Go away received I0309 09:26:31.855013 6 log.go:172] (0xc00299a000) (0xc00237c000) Stream removed, broadcasting: 1 I0309 09:26:31.855078 6 log.go:172] (0xc00299a000) (0xc0022900a0) Stream removed, broadcasting: 3 I0309 09:26:31.855087 6 log.go:172] (0xc00299a000) (0xc000a06b40) Stream removed, broadcasting: 5 Mar 9 09:26:31.855: INFO: Exec stderr: "" [AfterEach] [k8s.io] 
KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:26:31.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9614" for this suite. • [SLOW TEST:9.189 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2869,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:26:31.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:26:32.359: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:26:35.415: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:26:35.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9225-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:26:36.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4841" for this suite. STEP: Destroying namespace "webhook-4841-markers" for this suite. 
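------------------------------
For context: the "Registering the mutating webhook for custom resource ... via the AdmissionRegistration API" step above amounts to creating a MutatingWebhookConfiguration object against admissionregistration.k8s.io/v1. A minimal client-go sketch of that call, assuming client-go v0.18+ method signatures; the configuration name, webhook name, service reference, and CA bundle below are placeholders, not the generated values from this run:

package main

import (
    "context"
    "log"

    admissionv1 "k8s.io/api/admissionregistration/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the same kubeconfig the e2e framework uses.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        log.Fatal(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    failurePolicy := admissionv1.Fail
    sideEffects := admissionv1.SideEffectClassNone

    hook := &admissionv1.MutatingWebhookConfiguration{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-crd-mutation"},
        Webhooks: []admissionv1.MutatingWebhook{{
            Name: "crd-mutation.webhook.example.com",
            ClientConfig: admissionv1.WebhookClientConfig{
                // The e2e run fronts the webhook pod with a Service.
                Service: &admissionv1.ServiceReference{
                    Namespace: "webhook-4841",
                    Name:      "e2e-test-webhook",
                },
                // CABundle must hold the PEM CA that signed the webhook's serving cert.
                CABundle: []byte("<PEM CA bundle>"),
            },
            Rules: []admissionv1.RuleWithOperations{{
                Operations: []admissionv1.OperationType{admissionv1.Create, admissionv1.Update},
                Rule: admissionv1.Rule{
                    APIGroups:   []string{"webhook.example.com"},
                    APIVersions: []string{"*"},
                    Resources:   []string{"*"},
                },
            }},
            FailurePolicy:           &failurePolicy,
            SideEffects:             &sideEffects,
            AdmissionReviewVersions: []string{"v1", "v1beta1"},
        }},
    }

    _, err = client.AdmissionregistrationV1().
        MutatingWebhookConfigurations().
        Create(context.TODO(), hook, metav1.CreateOptions{})
    if err != nil {
        log.Fatal(err)
    }
}

The storage-version half of the test then patches the CustomResourceDefinition so v2 becomes the storage version and verifies the webhook still mutates a patch to the existing object, per the STEP lines above.
------------------------------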
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":175,"skipped":2875,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:26:36.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:26:38.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-305" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2881,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:26:39.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0309 09:26:49.237140 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
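------------------------------
The ownership trick in the STEP lines above (giving half of rc1's pods a second owner, then deleting rc1) is plain OwnerReference manipulation plus a foreground delete. A rough client-go sketch, with the pod and controller names as placeholders standing in for the generated ones in this run:

package main

import (
    "context"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        log.Fatal(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ns := "gc-3658" // the test's namespace; any namespace works

    // Give a pod a second owner so it survives when the first owner is deleted.
    pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), "simpletest-rc-to-be-deleted-PLACEHOLDER", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }
    rc2, err := client.CoreV1().ReplicationControllers(ns).Get(context.TODO(), "simpletest-rc-to-stay", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }
    pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
        APIVersion: "v1",
        Kind:       "ReplicationController",
        Name:       rc2.Name,
        UID:        rc2.UID,
    })
    if _, err := client.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
        log.Fatal(err)
    }

    // Foreground deletion: the owner waits for its dependents, but the
    // garbage collector keeps any dependent that still has another valid owner.
    fg := metav1.DeletePropagationForeground
    err = client.CoreV1().ReplicationControllers(ns).Delete(context.TODO(),
        "simpletest-rc-to-be-deleted", metav1.DeleteOptions{PropagationPolicy: &fg})
    if err != nil {
        log.Fatal(err)
    }
}

After the foreground delete, only pods whose sole owner was simpletest-rc-to-be-deleted are collected; pods that also list simpletest-rc-to-stay keep running, which is exactly the outcome the test waits for above.
------------------------------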
Mar 9 09:26:49.237: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:26:49.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3658" for this suite. • [SLOW TEST:10.241 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":177,"skipped":2882,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:26:49.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:27:00.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9420" for this suite. • [SLOW TEST:11.167 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller.
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":178,"skipped":2886,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:27:00.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 9 09:27:03.018: INFO: Successfully updated pod "annotationupdate131a5c9c-0011-40d3-832b-e6a15225ce23" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:27:05.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9911" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:27:05.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-cc936871-a485-486a-8861-3e1a2b05c8a8 in namespace container-probe-5757 Mar 9 09:27:09.185: INFO: Started pod busybox-cc936871-a485-486a-8861-3e1a2b05c8a8 in namespace container-probe-5757 STEP: checking the pod's current state and verifying that restartCount is present Mar 9 09:27:09.189: INFO: Initial restart count of pod busybox-cc936871-a485-486a-8861-3e1a2b05c8a8 is 0 Mar 9 09:27:59.287: INFO: Restart count of pod container-probe-5757/busybox-cc936871-a485-486a-8861-3e1a2b05c8a8 is now 1 (50.098101495s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:27:59.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5757" for this suite. • [SLOW TEST:54.258 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2920,"failed":0} [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:27:59.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 9 09:27:59.459: INFO: Waiting up to 5m0s for pod "pod-beff7a89-b6cc-49a4-968d-754945cfd3d9" in namespace "emptydir-9100" to be "success or failure" Mar 9 09:27:59.462: INFO: Pod "pod-beff7a89-b6cc-49a4-968d-754945cfd3d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.167653ms Mar 9 09:28:01.466: INFO: Pod "pod-beff7a89-b6cc-49a4-968d-754945cfd3d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007130902s STEP: Saw pod success Mar 9 09:28:01.466: INFO: Pod "pod-beff7a89-b6cc-49a4-968d-754945cfd3d9" satisfied condition "success or failure" Mar 9 09:28:01.469: INFO: Trying to get logs from node jerma-worker pod pod-beff7a89-b6cc-49a4-968d-754945cfd3d9 container test-container: STEP: delete the pod Mar 9 09:28:01.524: INFO: Waiting for pod pod-beff7a89-b6cc-49a4-968d-754945cfd3d9 to disappear Mar 9 09:28:01.535: INFO: Pod pod-beff7a89-b6cc-49a4-968d-754945cfd3d9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:28:01.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9100" for this suite. 
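------------------------------
For reference, the "emptydir volume type on tmpfs" pod above mounts a memory-backed emptyDir and reports the mount's mode. A sketch of an equivalent pod object, with the image, names, and probe command assumed rather than taken from the run (this just builds and prints the spec):

package main

import (
    "encoding/json"
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Medium: Memory makes the kubelet back the emptyDir with tmpfs.
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Volumes: []v1.Volume{{
                Name: "test-volume",
                VolumeSource: v1.VolumeSource{
                    EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
                },
            }},
            Containers: []v1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                // Print the mount type and the directory's permission bits.
                Command: []string{"sh", "-c", "mount | grep /test-volume && stat -c %a /test-volume"},
                VolumeMounts: []v1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume",
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

The test asserts on the container's output: the mount should show up as tmpfs and the directory mode as 0777 in the usual configuration, though the exact expected strings live in the e2e source.
------------------------------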
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2920,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:28:01.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-kpqj STEP: Creating a pod to test atomic-volume-subpath Mar 9 09:28:01.609: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kpqj" in namespace "subpath-2350" to be "success or failure" Mar 9 09:28:01.613: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.805197ms Mar 9 09:28:03.617: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Running", Reason="", readiness=true. Elapsed: 2.007646729s Mar 9 09:28:05.621: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Running", Reason="", readiness=true. Elapsed: 4.011719909s Mar 9 09:28:07.625: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Running", Reason="", readiness=true. Elapsed: 6.016390356s Mar 9 09:28:09.629: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Running", Reason="", readiness=true. Elapsed: 8.020245396s Mar 9 09:28:11.633: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Running", Reason="", readiness=true. Elapsed: 10.024148259s Mar 9 09:28:13.637: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Running", Reason="", readiness=true. Elapsed: 12.027798384s Mar 9 09:28:15.640: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Running", Reason="", readiness=true. Elapsed: 14.031246693s Mar 9 09:28:17.679: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Running", Reason="", readiness=true. Elapsed: 16.06975171s Mar 9 09:28:19.683: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Running", Reason="", readiness=true. Elapsed: 18.073755691s Mar 9 09:28:21.687: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Running", Reason="", readiness=true. Elapsed: 20.077750788s Mar 9 09:28:23.691: INFO: Pod "pod-subpath-test-secret-kpqj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.081539812s STEP: Saw pod success Mar 9 09:28:23.691: INFO: Pod "pod-subpath-test-secret-kpqj" satisfied condition "success or failure" Mar 9 09:28:23.693: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-kpqj container test-container-subpath-secret-kpqj: STEP: delete the pod Mar 9 09:28:23.733: INFO: Waiting for pod pod-subpath-test-secret-kpqj to disappear Mar 9 09:28:23.744: INFO: Pod pod-subpath-test-secret-kpqj no longer exists STEP: Deleting pod pod-subpath-test-secret-kpqj Mar 9 09:28:23.744: INFO: Deleting pod "pod-subpath-test-secret-kpqj" in namespace "subpath-2350" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:28:23.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2350" for this suite. • [SLOW TEST:22.214 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":182,"skipped":2946,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:28:23.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:28:23.860: INFO: Creating ReplicaSet my-hostname-basic-56020daf-151c-4910-b268-9614b33dcb46 Mar 9 09:28:23.900: INFO: Pod name my-hostname-basic-56020daf-151c-4910-b268-9614b33dcb46: Found 0 pods out of 1 Mar 9 09:28:28.906: INFO: Pod name my-hostname-basic-56020daf-151c-4910-b268-9614b33dcb46: Found 1 pods out of 1 Mar 9 09:28:28.906: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-56020daf-151c-4910-b268-9614b33dcb46" is running Mar 9 09:28:28.910: INFO: Pod "my-hostname-basic-56020daf-151c-4910-b268-9614b33dcb46-72mnl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 09:28:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 09:28:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 09:28:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 09:28:23 +0000 UTC Reason: Message:}]) Mar 9 09:28:28.910: INFO: Trying to dial the pod Mar 9 09:28:33.926: INFO: Controller 
my-hostname-basic-56020daf-151c-4910-b268-9614b33dcb46: Got expected result from replica 1 [my-hostname-basic-56020daf-151c-4910-b268-9614b33dcb46-72mnl]: "my-hostname-basic-56020daf-151c-4910-b268-9614b33dcb46-72mnl", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:28:33.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1876" for this suite. • [SLOW TEST:10.179 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":183,"skipped":2967,"failed":0} [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:28:33.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4d108533-abcf-4fee-b457-8ad961c6779e STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-4d108533-abcf-4fee-b457-8ad961c6779e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:28:38.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3415" for this suite. 
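------------------------------
The projected-ConfigMap update test above relies on the kubelet periodically re-syncing projected volume contents, so a changed ConfigMap eventually appears in the mounted file without restarting the pod. A sketch of a pod wired up that way; the ConfigMap name, mount path, and key are illustrative, not the generated ones in the log:

package main

import (
    "encoding/json"
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A projected volume sourcing a single ConfigMap; the container keeps
    // re-reading the projected file so an update becomes observable.
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
        Spec: v1.PodSpec{
            Volumes: []v1.Volume{{
                Name: "projected-volume",
                VolumeSource: v1.VolumeSource{
                    Projected: &v1.ProjectedVolumeSource{
                        Sources: []v1.VolumeProjection{{
                            ConfigMap: &v1.ConfigMapProjection{
                                LocalObjectReference: v1.LocalObjectReference{Name: "projected-configmap-test-upd"},
                            },
                        }},
                    },
                },
            }},
            Containers: []v1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 1; done"},
                VolumeMounts: []v1.VolumeMount{{
                    Name:      "projected-volume",
                    MountPath: "/etc/projected-configmap-volume",
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

Updating the referenced ConfigMap and re-reading the file from the container is essentially all the "waiting to observe update in volume" step does, modulo polling.
------------------------------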
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2967,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:28:38.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:28:38.328: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 9 09:28:40.418: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:28:40.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2192" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":185,"skipped":2986,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:28:40.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6623.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6623.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6623.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6623.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6623.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6623.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 09:28:44.619: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:44.623: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:44.627: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:44.630: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:44.640: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:44.643: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:44.646: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:44.650: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:44.655: INFO: Lookups using dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local] Mar 9 09:28:49.661: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods 
dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:49.664: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:49.667: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:49.669: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:49.677: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:49.680: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:49.683: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:49.685: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:49.694: INFO: Lookups using dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local] Mar 9 09:28:54.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:54.663: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:54.665: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:54.668: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod 
dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:54.676: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:54.679: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:54.682: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:54.685: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:54.691: INFO: Lookups using dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local] Mar 9 09:28:59.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:59.664: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:59.667: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:59.670: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:59.679: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:59.682: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods 
dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:59.685: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:59.688: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:28:59.695: INFO: Lookups using dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local] Mar 9 09:29:04.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:04.663: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:04.666: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:04.669: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:04.679: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:04.682: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:04.685: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:04.688: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:04.694: INFO: Lookups using dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local] Mar 9 09:29:09.659: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:09.662: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:09.665: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:09.668: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:09.676: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:09.679: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:09.681: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:09.684: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local from pod dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087: the server could not find the requested resource (get pods dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087) Mar 9 09:29:09.689: INFO: Lookups using dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6623.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6623.svc.cluster.local jessie_udp@dns-test-service-2.dns-6623.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6623.svc.cluster.local] Mar 9 09:29:14.691: INFO: DNS probes using dns-6623/dns-test-d2bd2571-c0e3-4916-8251-d60a517dc087 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:29:14.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6623" for this suite. • [SLOW TEST:34.390 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":186,"skipped":3007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:29:14.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:29:14.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 9 09:29:15.048: INFO: stderr: "" Mar 9 09:29:15.048: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-03-09T08:24:23Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:29:15.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5677" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":187,"skipped":3046,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:29:15.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:29:32.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3155" for this suite. • [SLOW TEST:17.218 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":188,"skipped":3057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:29:32.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:29:33.151: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:29:36.197: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:29:36.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-2302" for this suite. STEP: Destroying namespace "webhook-2302-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":189,"skipped":3089,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:29:36.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 9 09:29:36.474: INFO: Waiting up to 5m0s for pod "pod-678f4d43-7d77-4c0b-978c-fc0f8648c615" in namespace "emptydir-2127" to be "success or failure" Mar 9 09:29:36.478: INFO: Pod "pod-678f4d43-7d77-4c0b-978c-fc0f8648c615": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095003ms Mar 9 09:29:38.481: INFO: Pod "pod-678f4d43-7d77-4c0b-978c-fc0f8648c615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00747444s STEP: Saw pod success Mar 9 09:29:38.481: INFO: Pod "pod-678f4d43-7d77-4c0b-978c-fc0f8648c615" satisfied condition "success or failure" Mar 9 09:29:38.484: INFO: Trying to get logs from node jerma-worker pod pod-678f4d43-7d77-4c0b-978c-fc0f8648c615 container test-container: STEP: delete the pod Mar 9 09:29:38.513: INFO: Waiting for pod pod-678f4d43-7d77-4c0b-978c-fc0f8648c615 to disappear Mar 9 09:29:38.522: INFO: Pod pod-678f4d43-7d77-4c0b-978c-fc0f8648c615 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:29:38.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2127" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3091,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:29:38.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 9 09:29:38.617: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5220 /api/v1/namespaces/watch-5220/configmaps/e2e-watch-test-watch-closed 8abc7a99-f1bd-40d5-b524-ff1ffbe6aa64 272736 0 2020-03-09 09:29:38 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 9 09:29:38.617: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5220 /api/v1/namespaces/watch-5220/configmaps/e2e-watch-test-watch-closed 8abc7a99-f1bd-40d5-b524-ff1ffbe6aa64 272737 0 2020-03-09 09:29:38 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 9 09:29:38.628: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5220 /api/v1/namespaces/watch-5220/configmaps/e2e-watch-test-watch-closed 8abc7a99-f1bd-40d5-b524-ff1ffbe6aa64 272738 0 2020-03-09 09:29:38 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 9 09:29:38.629: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5220 /api/v1/namespaces/watch-5220/configmaps/e2e-watch-test-watch-closed 8abc7a99-f1bd-40d5-b524-ff1ffbe6aa64 272739 0 2020-03-09 09:29:38 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:29:38.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5220" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":191,"skipped":3102,"failed":0} SSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:29:38.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:29:38.741: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3024 I0309 09:29:38.754042 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3024, replica count: 1 I0309 09:29:39.804428 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0309 09:29:40.804657 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 9 09:29:40.974: INFO: Created: latency-svc-4fsg4 Mar 9 09:29:40.980: INFO: Got endpoints: latency-svc-4fsg4 [75.346254ms] Mar 9 09:29:41.015: INFO: Created: latency-svc-6ql8w Mar 9 09:29:41.024: INFO: Got endpoints: latency-svc-6ql8w [43.881414ms] Mar 9 09:29:41.046: INFO: Created: latency-svc-wbdzf Mar 9 09:29:41.055: INFO: Got endpoints: latency-svc-wbdzf [73.942321ms] Mar 9 09:29:41.105: INFO: Created: latency-svc-dpk5n Mar 9 09:29:41.110: INFO: Got endpoints: latency-svc-dpk5n [129.995654ms] Mar 9 09:29:41.135: INFO: Created: latency-svc-xvwb7 Mar 9 09:29:41.144: INFO: Got endpoints: latency-svc-xvwb7 [162.861037ms] Mar 9 09:29:41.163: INFO: Created: latency-svc-sfftj Mar 9 09:29:41.170: INFO: Got endpoints: latency-svc-sfftj [190.139455ms] Mar 9 09:29:41.250: INFO: Created: latency-svc-4lzlm Mar 9 09:29:41.251: INFO: Got endpoints: latency-svc-4lzlm [269.75707ms] Mar 9 09:29:41.285: INFO: Created: latency-svc-5kwjm Mar 9 09:29:41.291: INFO: Got endpoints: latency-svc-5kwjm [310.336519ms] Mar 9 09:29:41.315: INFO: Created: latency-svc-cv4tp Mar 9 09:29:41.320: INFO: Got endpoints: latency-svc-cv4tp [339.768679ms] Mar 9 09:29:41.387: INFO: Created: latency-svc-qps59 Mar 9 09:29:41.389: INFO: Got endpoints: latency-svc-qps59 [408.8509ms] Mar 9 09:29:41.429: INFO: Created: latency-svc-qqfhw Mar 9 09:29:41.441: INFO: Got endpoints: latency-svc-qqfhw [460.705689ms] Mar 9 09:29:41.471: INFO: Created: latency-svc-w4s5t Mar 9 09:29:41.476: INFO: Got endpoints: latency-svc-w4s5t [496.608723ms] Mar 9 09:29:41.542: INFO: Created: latency-svc-8ckkm Mar 9 09:29:41.544: INFO: Got endpoints: latency-svc-8ckkm [563.994768ms] Mar 9 09:29:41.590: INFO: Created: latency-svc-9dzjk Mar 9 09:29:41.620: INFO: Got endpoints: latency-svc-9dzjk [639.853953ms] Mar 9 09:29:41.716: INFO: Created: latency-svc-jfqtw Mar 9 09:29:41.731: INFO: Got endpoints: latency-svc-jfqtw [750.641621ms] Mar 9 09:29:41.764: INFO: Created: latency-svc-4wmsw Mar 9 09:29:41.772: 
INFO: Got endpoints: latency-svc-4wmsw [791.797753ms] Mar 9 09:29:41.854: INFO: Created: latency-svc-xgphl Mar 9 09:29:41.857: INFO: Got endpoints: latency-svc-xgphl [833.308168ms] Mar 9 09:29:41.909: INFO: Created: latency-svc-cskpp Mar 9 09:29:41.921: INFO: Got endpoints: latency-svc-cskpp [866.435233ms] Mar 9 09:29:42.016: INFO: Created: latency-svc-cwxv5 Mar 9 09:29:42.019: INFO: Got endpoints: latency-svc-cwxv5 [908.938842ms] Mar 9 09:29:42.083: INFO: Created: latency-svc-zrw9l Mar 9 09:29:42.091: INFO: Got endpoints: latency-svc-zrw9l [946.99004ms] Mar 9 09:29:42.153: INFO: Created: latency-svc-hjfw6 Mar 9 09:29:42.163: INFO: Got endpoints: latency-svc-hjfw6 [992.35119ms] Mar 9 09:29:42.183: INFO: Created: latency-svc-ttczg Mar 9 09:29:42.223: INFO: Got endpoints: latency-svc-ttczg [972.74271ms] Mar 9 09:29:42.293: INFO: Created: latency-svc-m966w Mar 9 09:29:42.316: INFO: Got endpoints: latency-svc-m966w [1.024633262s] Mar 9 09:29:42.366: INFO: Created: latency-svc-dqw2j Mar 9 09:29:42.410: INFO: Got endpoints: latency-svc-dqw2j [1.089750855s] Mar 9 09:29:42.442: INFO: Created: latency-svc-slmb9 Mar 9 09:29:42.453: INFO: Got endpoints: latency-svc-slmb9 [1.063870925s] Mar 9 09:29:42.495: INFO: Created: latency-svc-4jjj7 Mar 9 09:29:42.507: INFO: Got endpoints: latency-svc-4jjj7 [1.066244538s] Mar 9 09:29:42.573: INFO: Created: latency-svc-smcjv Mar 9 09:29:42.579: INFO: Got endpoints: latency-svc-smcjv [1.102656028s] Mar 9 09:29:42.610: INFO: Created: latency-svc-gm54c Mar 9 09:29:42.622: INFO: Got endpoints: latency-svc-gm54c [1.077263182s] Mar 9 09:29:42.709: INFO: Created: latency-svc-bbkzs Mar 9 09:29:42.717: INFO: Got endpoints: latency-svc-bbkzs [1.09705064s] Mar 9 09:29:42.742: INFO: Created: latency-svc-lrcwp Mar 9 09:29:42.748: INFO: Got endpoints: latency-svc-lrcwp [1.016627872s] Mar 9 09:29:42.777: INFO: Created: latency-svc-dgfr2 Mar 9 09:29:42.780: INFO: Got endpoints: latency-svc-dgfr2 [1.007470291s] Mar 9 09:29:42.809: INFO: Created: latency-svc-q8x5h Mar 9 09:29:42.849: INFO: Got endpoints: latency-svc-q8x5h [991.469403ms] Mar 9 09:29:42.881: INFO: Created: latency-svc-bzfp8 Mar 9 09:29:42.883: INFO: Got endpoints: latency-svc-bzfp8 [962.08529ms] Mar 9 09:29:42.933: INFO: Created: latency-svc-sw84l Mar 9 09:29:42.946: INFO: Got endpoints: latency-svc-sw84l [927.045367ms] Mar 9 09:29:43.004: INFO: Created: latency-svc-pcf2g Mar 9 09:29:43.016: INFO: Got endpoints: latency-svc-pcf2g [925.602572ms] Mar 9 09:29:43.049: INFO: Created: latency-svc-8zmml Mar 9 09:29:43.052: INFO: Got endpoints: latency-svc-8zmml [889.302468ms] Mar 9 09:29:43.079: INFO: Created: latency-svc-gbq4h Mar 9 09:29:43.088: INFO: Got endpoints: latency-svc-gbq4h [865.013752ms] Mar 9 09:29:43.140: INFO: Created: latency-svc-2dgvc Mar 9 09:29:43.149: INFO: Got endpoints: latency-svc-2dgvc [833.095229ms] Mar 9 09:29:43.182: INFO: Created: latency-svc-t2bv4 Mar 9 09:29:43.205: INFO: Got endpoints: latency-svc-t2bv4 [794.937139ms] Mar 9 09:29:43.234: INFO: Created: latency-svc-m7fpm Mar 9 09:29:43.284: INFO: Got endpoints: latency-svc-m7fpm [831.040156ms] Mar 9 09:29:43.324: INFO: Created: latency-svc-pw77p Mar 9 09:29:43.332: INFO: Got endpoints: latency-svc-pw77p [825.166489ms] Mar 9 09:29:43.355: INFO: Created: latency-svc-jhjsb Mar 9 09:29:43.361: INFO: Got endpoints: latency-svc-jhjsb [781.521439ms] Mar 9 09:29:43.384: INFO: Created: latency-svc-9mshq Mar 9 09:29:43.422: INFO: Got endpoints: latency-svc-9mshq [800.402118ms] Mar 9 09:29:43.449: INFO: Created: latency-svc-fbjg5 Mar 9 09:29:43.457: 
INFO: Got endpoints: latency-svc-fbjg5 [739.92593ms] Mar 9 09:29:43.485: INFO: Created: latency-svc-flht2 Mar 9 09:29:43.494: INFO: Got endpoints: latency-svc-flht2 [746.029727ms] Mar 9 09:29:43.565: INFO: Created: latency-svc-m9h69 Mar 9 09:29:43.586: INFO: Got endpoints: latency-svc-m9h69 [805.940563ms] Mar 9 09:29:43.617: INFO: Created: latency-svc-7pngr Mar 9 09:29:43.639: INFO: Got endpoints: latency-svc-7pngr [789.865463ms] Mar 9 09:29:43.709: INFO: Created: latency-svc-5swqz Mar 9 09:29:43.729: INFO: Got endpoints: latency-svc-5swqz [845.9922ms] Mar 9 09:29:43.773: INFO: Created: latency-svc-vh67f Mar 9 09:29:43.783: INFO: Got endpoints: latency-svc-vh67f [837.431401ms] Mar 9 09:29:43.857: INFO: Created: latency-svc-hvshs Mar 9 09:29:43.867: INFO: Got endpoints: latency-svc-hvshs [851.004691ms] Mar 9 09:29:43.913: INFO: Created: latency-svc-dpqd6 Mar 9 09:29:43.916: INFO: Got endpoints: latency-svc-dpqd6 [863.456206ms] Mar 9 09:29:43.985: INFO: Created: latency-svc-9vfr7 Mar 9 09:29:43.987: INFO: Got endpoints: latency-svc-9vfr7 [898.55866ms] Mar 9 09:29:44.037: INFO: Created: latency-svc-dd264 Mar 9 09:29:44.055: INFO: Got endpoints: latency-svc-dd264 [905.902001ms] Mar 9 09:29:44.135: INFO: Created: latency-svc-9rsfp Mar 9 09:29:44.152: INFO: Got endpoints: latency-svc-9rsfp [947.060623ms] Mar 9 09:29:44.195: INFO: Created: latency-svc-dxvzm Mar 9 09:29:44.205: INFO: Got endpoints: latency-svc-dxvzm [921.332773ms] Mar 9 09:29:44.284: INFO: Created: latency-svc-s97mj Mar 9 09:29:44.291: INFO: Got endpoints: latency-svc-s97mj [958.612869ms] Mar 9 09:29:44.315: INFO: Created: latency-svc-qkhtp Mar 9 09:29:44.326: INFO: Got endpoints: latency-svc-qkhtp [965.648739ms] Mar 9 09:29:44.357: INFO: Created: latency-svc-glsbj Mar 9 09:29:44.362: INFO: Got endpoints: latency-svc-glsbj [940.250537ms] Mar 9 09:29:44.459: INFO: Created: latency-svc-fgvxn Mar 9 09:29:44.471: INFO: Got endpoints: latency-svc-fgvxn [1.013852907s] Mar 9 09:29:44.525: INFO: Created: latency-svc-2msxd Mar 9 09:29:44.538: INFO: Got endpoints: latency-svc-2msxd [1.044522696s] Mar 9 09:29:44.590: INFO: Created: latency-svc-9wtdb Mar 9 09:29:44.601: INFO: Got endpoints: latency-svc-9wtdb [1.015041996s] Mar 9 09:29:44.632: INFO: Created: latency-svc-dqgxb Mar 9 09:29:44.640: INFO: Got endpoints: latency-svc-dqgxb [1.001426335s] Mar 9 09:29:44.734: INFO: Created: latency-svc-dqffh Mar 9 09:29:44.766: INFO: Got endpoints: latency-svc-dqffh [1.037107193s] Mar 9 09:29:44.767: INFO: Created: latency-svc-6v478 Mar 9 09:29:44.795: INFO: Created: latency-svc-j9p4t Mar 9 09:29:44.799: INFO: Got endpoints: latency-svc-6v478 [1.015980531s] Mar 9 09:29:44.802: INFO: Got endpoints: latency-svc-j9p4t [934.835194ms] Mar 9 09:29:44.872: INFO: Created: latency-svc-psgs2 Mar 9 09:29:44.875: INFO: Got endpoints: latency-svc-psgs2 [959.637841ms] Mar 9 09:29:44.903: INFO: Created: latency-svc-hq4pl Mar 9 09:29:44.910: INFO: Got endpoints: latency-svc-hq4pl [922.868357ms] Mar 9 09:29:44.939: INFO: Created: latency-svc-2zdhh Mar 9 09:29:44.946: INFO: Got endpoints: latency-svc-2zdhh [891.538009ms] Mar 9 09:29:44.998: INFO: Created: latency-svc-8vrff Mar 9 09:29:45.001: INFO: Got endpoints: latency-svc-8vrff [848.574197ms] Mar 9 09:29:45.028: INFO: Created: latency-svc-6ccvq Mar 9 09:29:45.031: INFO: Got endpoints: latency-svc-6ccvq [825.327832ms] Mar 9 09:29:45.064: INFO: Created: latency-svc-rg64t Mar 9 09:29:45.068: INFO: Got endpoints: latency-svc-rg64t [776.873944ms] Mar 9 09:29:45.095: INFO: Created: latency-svc-fpzlr Mar 9 09:29:45.129: 
INFO: Got endpoints: latency-svc-fpzlr [802.247879ms] Mar 9 09:29:45.155: INFO: Created: latency-svc-h7xw4 Mar 9 09:29:45.164: INFO: Got endpoints: latency-svc-h7xw4 [801.875962ms] Mar 9 09:29:45.202: INFO: Created: latency-svc-vwv8c Mar 9 09:29:45.213: INFO: Got endpoints: latency-svc-vwv8c [741.273299ms] Mar 9 09:29:45.267: INFO: Created: latency-svc-tr75j Mar 9 09:29:45.293: INFO: Got endpoints: latency-svc-tr75j [754.279048ms] Mar 9 09:29:45.293: INFO: Created: latency-svc-shcw4 Mar 9 09:29:45.304: INFO: Got endpoints: latency-svc-shcw4 [702.440872ms] Mar 9 09:29:45.351: INFO: Created: latency-svc-l8jwv Mar 9 09:29:45.358: INFO: Got endpoints: latency-svc-l8jwv [717.369471ms] Mar 9 09:29:45.398: INFO: Created: latency-svc-lpr6f Mar 9 09:29:45.401: INFO: Got endpoints: latency-svc-lpr6f [634.391236ms] Mar 9 09:29:45.430: INFO: Created: latency-svc-tjdnj Mar 9 09:29:45.443: INFO: Got endpoints: latency-svc-tjdnj [643.210498ms] Mar 9 09:29:45.473: INFO: Created: latency-svc-ctv76 Mar 9 09:29:45.479: INFO: Got endpoints: latency-svc-ctv76 [676.673808ms] Mar 9 09:29:45.544: INFO: Created: latency-svc-jcvnp Mar 9 09:29:45.563: INFO: Got endpoints: latency-svc-jcvnp [687.939485ms] Mar 9 09:29:45.623: INFO: Created: latency-svc-wxwhs Mar 9 09:29:45.630: INFO: Got endpoints: latency-svc-wxwhs [720.005149ms] Mar 9 09:29:45.696: INFO: Created: latency-svc-swzmt Mar 9 09:29:45.702: INFO: Got endpoints: latency-svc-swzmt [755.380915ms] Mar 9 09:29:45.742: INFO: Created: latency-svc-sfj8v Mar 9 09:29:45.766: INFO: Got endpoints: latency-svc-sfj8v [765.395791ms] Mar 9 09:29:45.823: INFO: Created: latency-svc-mnszj Mar 9 09:29:45.835: INFO: Got endpoints: latency-svc-mnszj [803.930322ms] Mar 9 09:29:45.885: INFO: Created: latency-svc-tbhth Mar 9 09:29:45.895: INFO: Got endpoints: latency-svc-tbhth [827.234765ms] Mar 9 09:29:45.955: INFO: Created: latency-svc-z9ll6 Mar 9 09:29:46.001: INFO: Got endpoints: latency-svc-z9ll6 [872.141872ms] Mar 9 09:29:46.001: INFO: Created: latency-svc-4kmfx Mar 9 09:29:46.038: INFO: Got endpoints: latency-svc-4kmfx [873.178838ms] Mar 9 09:29:46.135: INFO: Created: latency-svc-4lhhn Mar 9 09:29:46.168: INFO: Got endpoints: latency-svc-4lhhn [955.383934ms] Mar 9 09:29:46.168: INFO: Created: latency-svc-r28bx Mar 9 09:29:46.187: INFO: Got endpoints: latency-svc-r28bx [893.983321ms] Mar 9 09:29:46.284: INFO: Created: latency-svc-zq2q2 Mar 9 09:29:46.293: INFO: Got endpoints: latency-svc-zq2q2 [989.356809ms] Mar 9 09:29:46.318: INFO: Created: latency-svc-w9vdb Mar 9 09:29:46.329: INFO: Got endpoints: latency-svc-w9vdb [971.425329ms] Mar 9 09:29:46.361: INFO: Created: latency-svc-b79jz Mar 9 09:29:46.404: INFO: Got endpoints: latency-svc-b79jz [1.003367986s] Mar 9 09:29:46.439: INFO: Created: latency-svc-h6kx9 Mar 9 09:29:46.450: INFO: Got endpoints: latency-svc-h6kx9 [1.007516253s] Mar 9 09:29:46.469: INFO: Created: latency-svc-xpm9n Mar 9 09:29:46.480: INFO: Got endpoints: latency-svc-xpm9n [1.00131159s] Mar 9 09:29:46.504: INFO: Created: latency-svc-7xlww Mar 9 09:29:46.547: INFO: Got endpoints: latency-svc-7xlww [983.889044ms] Mar 9 09:29:46.551: INFO: Created: latency-svc-snh6g Mar 9 09:29:46.557: INFO: Got endpoints: latency-svc-snh6g [926.940246ms] Mar 9 09:29:46.589: INFO: Created: latency-svc-gxnlp Mar 9 09:29:46.596: INFO: Got endpoints: latency-svc-gxnlp [893.972978ms] Mar 9 09:29:46.619: INFO: Created: latency-svc-mvxtw Mar 9 09:29:46.626: INFO: Got endpoints: latency-svc-mvxtw [859.84829ms] Mar 9 09:29:46.680: INFO: Created: latency-svc-j7cc6 Mar 9 09:29:46.689: 
INFO: Got endpoints: latency-svc-j7cc6 [854.659147ms] Mar 9 09:29:46.726: INFO: Created: latency-svc-4l27z Mar 9 09:29:46.741: INFO: Got endpoints: latency-svc-4l27z [845.704868ms] Mar 9 09:29:46.769: INFO: Created: latency-svc-gg96n Mar 9 09:29:46.777: INFO: Got endpoints: latency-svc-gg96n [776.335164ms] Mar 9 09:29:46.811: INFO: Created: latency-svc-wnpf5 Mar 9 09:29:46.842: INFO: Created: latency-svc-8hcw6 Mar 9 09:29:46.842: INFO: Got endpoints: latency-svc-wnpf5 [804.557018ms] Mar 9 09:29:46.850: INFO: Got endpoints: latency-svc-8hcw6 [681.740055ms] Mar 9 09:29:46.870: INFO: Created: latency-svc-mgx4t Mar 9 09:29:46.881: INFO: Got endpoints: latency-svc-mgx4t [694.127227ms] Mar 9 09:29:46.906: INFO: Created: latency-svc-4cdsx Mar 9 09:29:46.960: INFO: Got endpoints: latency-svc-4cdsx [667.41641ms] Mar 9 09:29:46.962: INFO: Created: latency-svc-2zfmb Mar 9 09:29:46.965: INFO: Got endpoints: latency-svc-2zfmb [635.603706ms] Mar 9 09:29:47.021: INFO: Created: latency-svc-62rsc Mar 9 09:29:47.037: INFO: Got endpoints: latency-svc-62rsc [632.901985ms] Mar 9 09:29:47.255: INFO: Created: latency-svc-q7hf8 Mar 9 09:29:47.258: INFO: Got endpoints: latency-svc-q7hf8 [808.086796ms] Mar 9 09:29:47.680: INFO: Created: latency-svc-mnpzw Mar 9 09:29:47.692: INFO: Got endpoints: latency-svc-mnpzw [1.211769536s] Mar 9 09:29:47.742: INFO: Created: latency-svc-zz6b9 Mar 9 09:29:47.799: INFO: Got endpoints: latency-svc-zz6b9 [1.251998597s] Mar 9 09:29:47.802: INFO: Created: latency-svc-ftgbh Mar 9 09:29:47.812: INFO: Got endpoints: latency-svc-ftgbh [1.255185114s] Mar 9 09:29:47.836: INFO: Created: latency-svc-kprhb Mar 9 09:29:47.843: INFO: Got endpoints: latency-svc-kprhb [1.246862721s] Mar 9 09:29:47.872: INFO: Created: latency-svc-pl2rq Mar 9 09:29:47.897: INFO: Got endpoints: latency-svc-pl2rq [1.270716223s] Mar 9 09:29:47.963: INFO: Created: latency-svc-wj9rj Mar 9 09:29:47.975: INFO: Got endpoints: latency-svc-wj9rj [1.285571144s] Mar 9 09:29:48.003: INFO: Created: latency-svc-zhk6m Mar 9 09:29:48.018: INFO: Got endpoints: latency-svc-zhk6m [1.277206652s] Mar 9 09:29:48.092: INFO: Created: latency-svc-qpphp Mar 9 09:29:48.131: INFO: Got endpoints: latency-svc-qpphp [1.353377822s] Mar 9 09:29:48.131: INFO: Created: latency-svc-q72cj Mar 9 09:29:48.144: INFO: Got endpoints: latency-svc-q72cj [1.302170972s] Mar 9 09:29:48.166: INFO: Created: latency-svc-hb6lb Mar 9 09:29:48.175: INFO: Got endpoints: latency-svc-hb6lb [1.324936795s] Mar 9 09:29:48.233: INFO: Created: latency-svc-jjqhf Mar 9 09:29:48.241: INFO: Got endpoints: latency-svc-jjqhf [1.360321416s] Mar 9 09:29:48.277: INFO: Created: latency-svc-pt75m Mar 9 09:29:48.311: INFO: Got endpoints: latency-svc-pt75m [1.350185771s] Mar 9 09:29:48.311: INFO: Created: latency-svc-657fx Mar 9 09:29:48.356: INFO: Got endpoints: latency-svc-657fx [1.391244426s] Mar 9 09:29:48.365: INFO: Created: latency-svc-j9kmq Mar 9 09:29:48.374: INFO: Got endpoints: latency-svc-j9kmq [1.336767322s] Mar 9 09:29:48.399: INFO: Created: latency-svc-jf2wq Mar 9 09:29:48.411: INFO: Got endpoints: latency-svc-jf2wq [1.152242661s] Mar 9 09:29:48.448: INFO: Created: latency-svc-6bg7z Mar 9 09:29:48.454: INFO: Got endpoints: latency-svc-6bg7z [761.452241ms] Mar 9 09:29:48.501: INFO: Created: latency-svc-th757 Mar 9 09:29:48.507: INFO: Got endpoints: latency-svc-th757 [707.960818ms] Mar 9 09:29:48.534: INFO: Created: latency-svc-c2rxq Mar 9 09:29:48.544: INFO: Got endpoints: latency-svc-c2rxq [731.113243ms] Mar 9 09:29:48.562: INFO: Created: latency-svc-qmfsd Mar 9 09:29:48.574: 
INFO: Got endpoints: latency-svc-qmfsd [731.643971ms] Mar 9 09:29:48.614: INFO: Created: latency-svc-qg84s Mar 9 09:29:48.617: INFO: Got endpoints: latency-svc-qg84s [720.143579ms] Mar 9 09:29:48.646: INFO: Created: latency-svc-fs95f Mar 9 09:29:48.647: INFO: Got endpoints: latency-svc-fs95f [672.209575ms] Mar 9 09:29:48.695: INFO: Created: latency-svc-kpcft Mar 9 09:29:48.751: INFO: Got endpoints: latency-svc-kpcft [733.080843ms] Mar 9 09:29:48.767: INFO: Created: latency-svc-gfmrv Mar 9 09:29:48.786: INFO: Got endpoints: latency-svc-gfmrv [655.179441ms] Mar 9 09:29:48.813: INFO: Created: latency-svc-k54mm Mar 9 09:29:48.823: INFO: Got endpoints: latency-svc-k54mm [678.133702ms] Mar 9 09:29:48.851: INFO: Created: latency-svc-rtzfx Mar 9 09:29:48.877: INFO: Got endpoints: latency-svc-rtzfx [702.006975ms] Mar 9 09:29:48.886: INFO: Created: latency-svc-m7bz5 Mar 9 09:29:48.901: INFO: Got endpoints: latency-svc-m7bz5 [659.327743ms] Mar 9 09:29:48.929: INFO: Created: latency-svc-f7pxk Mar 9 09:29:48.937: INFO: Got endpoints: latency-svc-f7pxk [626.445008ms] Mar 9 09:29:48.958: INFO: Created: latency-svc-ljr9p Mar 9 09:29:48.968: INFO: Got endpoints: latency-svc-ljr9p [611.541478ms] Mar 9 09:29:49.015: INFO: Created: latency-svc-979n5 Mar 9 09:29:49.041: INFO: Created: latency-svc-jrs2v Mar 9 09:29:49.041: INFO: Got endpoints: latency-svc-979n5 [667.373161ms] Mar 9 09:29:49.046: INFO: Got endpoints: latency-svc-jrs2v [635.702024ms] Mar 9 09:29:49.072: INFO: Created: latency-svc-56cfs Mar 9 09:29:49.083: INFO: Got endpoints: latency-svc-56cfs [629.104172ms] Mar 9 09:29:49.102: INFO: Created: latency-svc-7w2sx Mar 9 09:29:49.152: INFO: Got endpoints: latency-svc-7w2sx [644.650345ms] Mar 9 09:29:49.153: INFO: Created: latency-svc-lpx2j Mar 9 09:29:49.167: INFO: Got endpoints: latency-svc-lpx2j [623.393997ms] Mar 9 09:29:49.192: INFO: Created: latency-svc-mhfxm Mar 9 09:29:49.217: INFO: Created: latency-svc-66wzp Mar 9 09:29:49.217: INFO: Got endpoints: latency-svc-mhfxm [642.328252ms] Mar 9 09:29:49.223: INFO: Got endpoints: latency-svc-66wzp [606.0447ms] Mar 9 09:29:49.247: INFO: Created: latency-svc-dl2qz Mar 9 09:29:49.278: INFO: Got endpoints: latency-svc-dl2qz [630.736149ms] Mar 9 09:29:49.289: INFO: Created: latency-svc-tj8xb Mar 9 09:29:49.295: INFO: Got endpoints: latency-svc-tj8xb [544.159354ms] Mar 9 09:29:49.323: INFO: Created: latency-svc-8bkd8 Mar 9 09:29:49.332: INFO: Got endpoints: latency-svc-8bkd8 [545.741218ms] Mar 9 09:29:49.360: INFO: Created: latency-svc-q4svf Mar 9 09:29:49.362: INFO: Got endpoints: latency-svc-q4svf [539.55175ms] Mar 9 09:29:49.423: INFO: Created: latency-svc-gbptm Mar 9 09:29:49.429: INFO: Got endpoints: latency-svc-gbptm [551.98336ms] Mar 9 09:29:49.457: INFO: Created: latency-svc-p2vvn Mar 9 09:29:49.465: INFO: Got endpoints: latency-svc-p2vvn [564.172057ms] Mar 9 09:29:49.486: INFO: Created: latency-svc-kg8pb Mar 9 09:29:49.495: INFO: Got endpoints: latency-svc-kg8pb [558.122639ms] Mar 9 09:29:49.515: INFO: Created: latency-svc-q9v8r Mar 9 09:29:49.520: INFO: Got endpoints: latency-svc-q9v8r [552.31146ms] Mar 9 09:29:49.578: INFO: Created: latency-svc-h4s85 Mar 9 09:29:49.601: INFO: Got endpoints: latency-svc-h4s85 [559.13723ms] Mar 9 09:29:49.637: INFO: Created: latency-svc-dp8t4 Mar 9 09:29:49.646: INFO: Got endpoints: latency-svc-dp8t4 [599.073399ms] Mar 9 09:29:49.666: INFO: Created: latency-svc-vf6tv Mar 9 09:29:49.677: INFO: Got endpoints: latency-svc-vf6tv [594.523633ms] Mar 9 09:29:49.722: INFO: Created: latency-svc-slvkx Mar 9 09:29:49.731: 
INFO: Got endpoints: latency-svc-slvkx [579.308502ms] Mar 9 09:29:49.763: INFO: Created: latency-svc-qqh45 Mar 9 09:29:49.781: INFO: Got endpoints: latency-svc-qqh45 [613.608739ms] Mar 9 09:29:49.805: INFO: Created: latency-svc-bdbdw Mar 9 09:29:49.811: INFO: Got endpoints: latency-svc-bdbdw [594.416602ms] Mar 9 09:29:49.865: INFO: Created: latency-svc-qzxgb Mar 9 09:29:49.868: INFO: Got endpoints: latency-svc-qzxgb [644.563753ms] Mar 9 09:29:49.912: INFO: Created: latency-svc-r5qjj Mar 9 09:29:49.920: INFO: Got endpoints: latency-svc-r5qjj [641.394476ms] Mar 9 09:29:49.948: INFO: Created: latency-svc-jhttz Mar 9 09:29:49.997: INFO: Got endpoints: latency-svc-jhttz [701.419236ms] Mar 9 09:29:50.021: INFO: Created: latency-svc-ts4km Mar 9 09:29:50.046: INFO: Got endpoints: latency-svc-ts4km [714.562943ms] Mar 9 09:29:50.092: INFO: Created: latency-svc-r4f27 Mar 9 09:29:50.094: INFO: Got endpoints: latency-svc-r4f27 [732.071612ms] Mar 9 09:29:50.142: INFO: Created: latency-svc-k8hbz Mar 9 09:29:50.177: INFO: Got endpoints: latency-svc-k8hbz [748.068599ms] Mar 9 09:29:50.200: INFO: Created: latency-svc-vl4f7 Mar 9 09:29:50.203: INFO: Got endpoints: latency-svc-vl4f7 [738.096209ms] Mar 9 09:29:50.236: INFO: Created: latency-svc-pgdm4 Mar 9 09:29:50.279: INFO: Got endpoints: latency-svc-pgdm4 [783.414384ms] Mar 9 09:29:50.285: INFO: Created: latency-svc-mjzn6 Mar 9 09:29:50.300: INFO: Got endpoints: latency-svc-mjzn6 [779.888748ms] Mar 9 09:29:50.327: INFO: Created: latency-svc-mqpvj Mar 9 09:29:50.342: INFO: Got endpoints: latency-svc-mqpvj [741.782471ms] Mar 9 09:29:50.615: INFO: Created: latency-svc-pjt58 Mar 9 09:29:50.619: INFO: Got endpoints: latency-svc-pjt58 [973.889751ms] Mar 9 09:29:50.902: INFO: Created: latency-svc-gsq86 Mar 9 09:29:50.910: INFO: Got endpoints: latency-svc-gsq86 [1.2325697s] Mar 9 09:29:50.957: INFO: Created: latency-svc-q5f5s Mar 9 09:29:50.967: INFO: Got endpoints: latency-svc-q5f5s [1.235834486s] Mar 9 09:29:50.999: INFO: Created: latency-svc-dc4bg Mar 9 09:29:51.039: INFO: Got endpoints: latency-svc-dc4bg [1.258180202s] Mar 9 09:29:51.064: INFO: Created: latency-svc-4tkc7 Mar 9 09:29:51.070: INFO: Got endpoints: latency-svc-4tkc7 [1.258367172s] Mar 9 09:29:51.093: INFO: Created: latency-svc-jvkgz Mar 9 09:29:51.097: INFO: Got endpoints: latency-svc-jvkgz [1.229058238s] Mar 9 09:29:51.129: INFO: Created: latency-svc-v62vj Mar 9 09:29:51.177: INFO: Got endpoints: latency-svc-v62vj [1.257461331s] Mar 9 09:29:51.179: INFO: Created: latency-svc-48v6f Mar 9 09:29:51.195: INFO: Got endpoints: latency-svc-48v6f [1.198124956s] Mar 9 09:29:51.256: INFO: Created: latency-svc-672nd Mar 9 09:29:51.259: INFO: Got endpoints: latency-svc-672nd [1.212157447s] Mar 9 09:29:51.314: INFO: Created: latency-svc-jjjt6 Mar 9 09:29:51.347: INFO: Created: latency-svc-thkw4 Mar 9 09:29:51.347: INFO: Got endpoints: latency-svc-jjjt6 [1.252907467s] Mar 9 09:29:51.354: INFO: Got endpoints: latency-svc-thkw4 [1.177105228s] Mar 9 09:29:51.375: INFO: Created: latency-svc-d96xg Mar 9 09:29:51.385: INFO: Got endpoints: latency-svc-d96xg [1.181733545s] Mar 9 09:29:51.412: INFO: Created: latency-svc-rpmgd Mar 9 09:29:51.452: INFO: Got endpoints: latency-svc-rpmgd [1.172996513s] Mar 9 09:29:51.467: INFO: Created: latency-svc-rbbrq Mar 9 09:29:51.469: INFO: Got endpoints: latency-svc-rbbrq [1.169194661s] Mar 9 09:29:51.497: INFO: Created: latency-svc-pp6gf Mar 9 09:29:51.506: INFO: Got endpoints: latency-svc-pp6gf [1.163157203s] Mar 9 09:29:51.527: INFO: Created: latency-svc-ddlx5 Mar 9 09:29:51.536: 
INFO: Got endpoints: latency-svc-ddlx5 [916.930941ms] Mar 9 09:29:51.584: INFO: Created: latency-svc-jc4b8 Mar 9 09:29:51.603: INFO: Got endpoints: latency-svc-jc4b8 [693.279004ms] Mar 9 09:29:51.627: INFO: Created: latency-svc-w6kqn Mar 9 09:29:51.640: INFO: Got endpoints: latency-svc-w6kqn [672.301383ms] Mar 9 09:29:51.683: INFO: Created: latency-svc-sw2tq Mar 9 09:29:51.722: INFO: Got endpoints: latency-svc-sw2tq [682.652071ms] Mar 9 09:29:51.723: INFO: Created: latency-svc-78ggr Mar 9 09:29:51.741: INFO: Got endpoints: latency-svc-78ggr [671.349217ms] Mar 9 09:29:51.766: INFO: Created: latency-svc-g8zt6 Mar 9 09:29:51.768: INFO: Got endpoints: latency-svc-g8zt6 [671.112596ms] Mar 9 09:29:51.802: INFO: Created: latency-svc-bkwlq Mar 9 09:29:51.871: INFO: Got endpoints: latency-svc-bkwlq [694.178442ms] Mar 9 09:29:51.871: INFO: Created: latency-svc-xpcph Mar 9 09:29:51.899: INFO: Got endpoints: latency-svc-xpcph [704.471003ms] Mar 9 09:29:51.901: INFO: Created: latency-svc-d4dn7 Mar 9 09:29:51.923: INFO: Got endpoints: latency-svc-d4dn7 [664.084388ms] Mar 9 09:29:51.958: INFO: Created: latency-svc-9crd6 Mar 9 09:29:51.966: INFO: Got endpoints: latency-svc-9crd6 [619.124611ms] Mar 9 09:29:52.033: INFO: Created: latency-svc-fb6mq Mar 9 09:29:52.038: INFO: Got endpoints: latency-svc-fb6mq [683.988979ms] Mar 9 09:29:52.063: INFO: Created: latency-svc-k9c2t Mar 9 09:29:52.068: INFO: Got endpoints: latency-svc-k9c2t [683.601345ms] Mar 9 09:29:52.096: INFO: Created: latency-svc-wx5bd Mar 9 09:29:52.105: INFO: Got endpoints: latency-svc-wx5bd [653.205083ms] Mar 9 09:29:52.132: INFO: Created: latency-svc-lfszg Mar 9 09:29:52.188: INFO: Got endpoints: latency-svc-lfszg [719.23824ms] Mar 9 09:29:52.191: INFO: Created: latency-svc-6gv94 Mar 9 09:29:52.229: INFO: Got endpoints: latency-svc-6gv94 [723.518678ms] Mar 9 09:29:52.264: INFO: Created: latency-svc-xlvg4 Mar 9 09:29:52.274: INFO: Got endpoints: latency-svc-xlvg4 [737.807504ms] Mar 9 09:29:52.314: INFO: Created: latency-svc-snd5b Mar 9 09:29:52.317: INFO: Got endpoints: latency-svc-snd5b [713.790524ms] Mar 9 09:29:52.361: INFO: Created: latency-svc-69n2w Mar 9 09:29:52.371: INFO: Got endpoints: latency-svc-69n2w [731.174239ms] Mar 9 09:29:52.371: INFO: Latencies: [43.881414ms 73.942321ms 129.995654ms 162.861037ms 190.139455ms 269.75707ms 310.336519ms 339.768679ms 408.8509ms 460.705689ms 496.608723ms 539.55175ms 544.159354ms 545.741218ms 551.98336ms 552.31146ms 558.122639ms 559.13723ms 563.994768ms 564.172057ms 579.308502ms 594.416602ms 594.523633ms 599.073399ms 606.0447ms 611.541478ms 613.608739ms 619.124611ms 623.393997ms 626.445008ms 629.104172ms 630.736149ms 632.901985ms 634.391236ms 635.603706ms 635.702024ms 639.853953ms 641.394476ms 642.328252ms 643.210498ms 644.563753ms 644.650345ms 653.205083ms 655.179441ms 659.327743ms 664.084388ms 667.373161ms 667.41641ms 671.112596ms 671.349217ms 672.209575ms 672.301383ms 676.673808ms 678.133702ms 681.740055ms 682.652071ms 683.601345ms 683.988979ms 687.939485ms 693.279004ms 694.127227ms 694.178442ms 701.419236ms 702.006975ms 702.440872ms 704.471003ms 707.960818ms 713.790524ms 714.562943ms 717.369471ms 719.23824ms 720.005149ms 720.143579ms 723.518678ms 731.113243ms 731.174239ms 731.643971ms 732.071612ms 733.080843ms 737.807504ms 738.096209ms 739.92593ms 741.273299ms 741.782471ms 746.029727ms 748.068599ms 750.641621ms 754.279048ms 755.380915ms 761.452241ms 765.395791ms 776.335164ms 776.873944ms 779.888748ms 781.521439ms 783.414384ms 789.865463ms 791.797753ms 794.937139ms 800.402118ms 801.875962ms 
802.247879ms 803.930322ms 804.557018ms 805.940563ms 808.086796ms 825.166489ms 825.327832ms 827.234765ms 831.040156ms 833.095229ms 833.308168ms 837.431401ms 845.704868ms 845.9922ms 848.574197ms 851.004691ms 854.659147ms 859.84829ms 863.456206ms 865.013752ms 866.435233ms 872.141872ms 873.178838ms 889.302468ms 891.538009ms 893.972978ms 893.983321ms 898.55866ms 905.902001ms 908.938842ms 916.930941ms 921.332773ms 922.868357ms 925.602572ms 926.940246ms 927.045367ms 934.835194ms 940.250537ms 946.99004ms 947.060623ms 955.383934ms 958.612869ms 959.637841ms 962.08529ms 965.648739ms 971.425329ms 972.74271ms 973.889751ms 983.889044ms 989.356809ms 991.469403ms 992.35119ms 1.00131159s 1.001426335s 1.003367986s 1.007470291s 1.007516253s 1.013852907s 1.015041996s 1.015980531s 1.016627872s 1.024633262s 1.037107193s 1.044522696s 1.063870925s 1.066244538s 1.077263182s 1.089750855s 1.09705064s 1.102656028s 1.152242661s 1.163157203s 1.169194661s 1.172996513s 1.177105228s 1.181733545s 1.198124956s 1.211769536s 1.212157447s 1.229058238s 1.2325697s 1.235834486s 1.246862721s 1.251998597s 1.252907467s 1.255185114s 1.257461331s 1.258180202s 1.258367172s 1.270716223s 1.277206652s 1.285571144s 1.302170972s 1.324936795s 1.336767322s 1.350185771s 1.353377822s 1.360321416s 1.391244426s] Mar 9 09:29:52.371: INFO: 50 %ile: 801.875962ms Mar 9 09:29:52.371: INFO: 90 %ile: 1.229058238s Mar 9 09:29:52.371: INFO: 99 %ile: 1.360321416s Mar 9 09:29:52.371: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:29:52.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3024" for this suite. • [SLOW TEST:13.758 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":192,"skipped":3105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:29:52.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-1befaf6a-e521-4407-87e7-f006be302a4b STEP: Creating a pod to test consume secrets Mar 9 09:29:52.498: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8955be5-8094-4253-8bb4-85b4d7abcb2e" in namespace "projected-4530" to be "success or failure" Mar 9 09:29:52.502: INFO: Pod 
"pod-projected-secrets-d8955be5-8094-4253-8bb4-85b4d7abcb2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.621141ms Mar 9 09:29:54.509: INFO: Pod "pod-projected-secrets-d8955be5-8094-4253-8bb4-85b4d7abcb2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011398686s STEP: Saw pod success Mar 9 09:29:54.509: INFO: Pod "pod-projected-secrets-d8955be5-8094-4253-8bb4-85b4d7abcb2e" satisfied condition "success or failure" Mar 9 09:29:54.512: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-d8955be5-8094-4253-8bb4-85b4d7abcb2e container projected-secret-volume-test: STEP: delete the pod Mar 9 09:29:54.540: INFO: Waiting for pod pod-projected-secrets-d8955be5-8094-4253-8bb4-85b4d7abcb2e to disappear Mar 9 09:29:54.551: INFO: Pod pod-projected-secrets-d8955be5-8094-4253-8bb4-85b4d7abcb2e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:29:54.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4530" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3132,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:29:54.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:29:55.436: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 9 09:29:57.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342995, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342995, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342995, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342995, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint 
Mar 9 09:30:00.595: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:30:01.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8561" for this suite. STEP: Destroying namespace "webhook-8561-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.007 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":194,"skipped":3142,"failed":0} SSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:30:01.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
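The object dump that follows is the full PodSpec as stored by the apiserver; the fields this test actually sets reduce to a short manifest (values taken from the dump below):

    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-403
      namespace: dns-403
    spec:
      dnsPolicy: "None"                  # ignore the cluster DNS entirely
      dnsConfig:
        nameservers: ["1.1.1.1"]
        searches: ["resolv.conf.local"]
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["pause"]

With dnsPolicy "None", the kubelet writes the pod's resolv.conf purely from dnsConfig, which is exactly what the two verification steps below check.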
Mar 9 09:30:01.656: INFO: Created pod &Pod{ObjectMeta:{dns-403 dns-403 /api/v1/namespaces/dns-403/pods/dns-403 12a80863-320a-4df7-915d-ff19aaa19c7a 273688 0 2020-03-09 09:30:01 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-llgs4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-llgs4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-llgs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 9 09:30:03.664: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-403 PodName:dns-403 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:30:03.664: INFO: >>> kubeConfig: /root/.kube/config I0309 09:30:03.686327 6 log.go:172] (0xc001608370) (0xc0028d0280) Create stream I0309 09:30:03.686347 6 log.go:172] (0xc001608370) (0xc0028d0280) Stream added, broadcasting: 1 I0309 09:30:03.687399 6 log.go:172] (0xc001608370) Reply frame received for 1 I0309 09:30:03.687422 6 log.go:172] (0xc001608370) (0xc0028d03c0) Create stream I0309 09:30:03.687430 6 log.go:172] (0xc001608370) (0xc0028d03c0) Stream added, broadcasting: 3 I0309 09:30:03.687881 6 log.go:172] (0xc001608370) Reply frame received for 3 I0309 09:30:03.687897 6 log.go:172] (0xc001608370) (0xc0014d63c0) Create stream I0309 09:30:03.687905 6 log.go:172] (0xc001608370) (0xc0014d63c0) Stream added, broadcasting: 5 I0309 09:30:03.688411 6 log.go:172] (0xc001608370) Reply frame received for 5 I0309 09:30:03.739442 6 log.go:172] (0xc001608370) Data frame received for 3 I0309 09:30:03.739461 6 log.go:172] (0xc0028d03c0) (3) Data frame handling I0309 09:30:03.739473 6 log.go:172] (0xc0028d03c0) (3) Data frame sent I0309 09:30:03.739750 6 log.go:172] (0xc001608370) Data frame received for 5 I0309 09:30:03.739760 6 log.go:172] (0xc0014d63c0) (5) Data frame handling I0309 09:30:03.739836 6 log.go:172] (0xc001608370) Data frame received for 3 I0309 09:30:03.739850 6 log.go:172] (0xc0028d03c0) (3) Data frame handling I0309 09:30:03.740656 6 log.go:172] (0xc001608370) Data frame received for 1 I0309 09:30:03.740664 6 log.go:172] (0xc0028d0280) (1) Data frame handling I0309 09:30:03.740672 6 log.go:172] (0xc0028d0280) (1) Data frame sent I0309 09:30:03.740864 6 log.go:172] (0xc001608370) (0xc0028d0280) Stream removed, broadcasting: 1 I0309 09:30:03.740879 6 log.go:172] (0xc001608370) Go away received I0309 09:30:03.740952 6 log.go:172] (0xc001608370) (0xc0028d0280) Stream removed, broadcasting: 1 I0309 09:30:03.740965 6 log.go:172] (0xc001608370) (0xc0028d03c0) Stream removed, broadcasting: 3 I0309 09:30:03.740976 6 log.go:172] (0xc001608370) (0xc0014d63c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 9 09:30:03.741: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-403 PodName:dns-403 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 9 09:30:03.741: INFO: >>> kubeConfig: /root/.kube/config I0309 09:30:03.760009 6 log.go:172] (0xc002c80420) (0xc0027ec280) Create stream I0309 09:30:03.760030 6 log.go:172] (0xc002c80420) (0xc0027ec280) Stream added, broadcasting: 1 I0309 09:30:03.761299 6 log.go:172] (0xc002c80420) Reply frame received for 1 I0309 09:30:03.761321 6 log.go:172] (0xc002c80420) (0xc0028d0460) Create stream I0309 09:30:03.761327 6 log.go:172] (0xc002c80420) (0xc0028d0460) Stream added, broadcasting: 3 I0309 09:30:03.762386 6 log.go:172] (0xc002c80420) Reply frame received for 3 I0309 09:30:03.762408 6 log.go:172] (0xc002c80420) (0xc0027ec320) Create stream I0309 09:30:03.762417 6 log.go:172] (0xc002c80420) (0xc0027ec320) Stream added, broadcasting: 5 I0309 09:30:03.762972 6 log.go:172] (0xc002c80420) Reply frame received for 5 I0309 09:30:03.816208 6 log.go:172] (0xc002c80420) Data frame received for 3 I0309 09:30:03.816228 6 log.go:172] (0xc0028d0460) (3) Data frame handling I0309 09:30:03.816238 6 log.go:172] (0xc0028d0460) (3) Data frame sent I0309 09:30:03.816506 6 log.go:172] (0xc002c80420) Data frame received for 3 I0309 09:30:03.816518 6 log.go:172] (0xc0028d0460) (3) Data frame handling I0309 09:30:03.816613 6 log.go:172] (0xc002c80420) Data frame received for 5 I0309 09:30:03.816622 6 log.go:172] (0xc0027ec320) (5) Data frame handling I0309 09:30:03.817703 6 log.go:172] (0xc002c80420) Data frame received for 1 I0309 09:30:03.817714 6 log.go:172] (0xc0027ec280) (1) Data frame handling I0309 09:30:03.817719 6 log.go:172] (0xc0027ec280) (1) Data frame sent I0309 09:30:03.817782 6 log.go:172] (0xc002c80420) (0xc0027ec280) Stream removed, broadcasting: 1 I0309 09:30:03.817796 6 log.go:172] (0xc002c80420) Go away received I0309 09:30:03.817869 6 log.go:172] (0xc002c80420) (0xc0027ec280) Stream removed, broadcasting: 1 I0309 09:30:03.817882 6 log.go:172] (0xc002c80420) (0xc0028d0460) Stream removed, broadcasting: 3 I0309 09:30:03.817889 6 log.go:172] (0xc002c80420) (0xc0027ec320) Stream removed, broadcasting: 5 Mar 9 09:30:03.817: INFO: Deleting pod dns-403... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:30:03.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-403" for this suite. 
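The two exec checks above (/agnhost dns-suffix and /agnhost dns-server-list) read back the resolv.conf that the kubelet generated from the pod's dnsConfig. The same data can be inspected directly while the pod is still running, assuming a shell and cat are available in the image (the agnhost subcommands used above avoid that assumption):

    kubectl exec -n dns-403 dns-403 -- cat /etc/resolv.conf
    # expected, given dnsPolicy None and the dnsConfig above:
    # nameserver 1.1.1.1
    # search resolv.conf.local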
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":195,"skipped":3147,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:30:03.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-574 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-574 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-574 Mar 9 09:30:04.229: INFO: Found 0 stateful pods, waiting for 1 Mar 9 09:30:14.236: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 9 09:30:14.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-574 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 09:30:14.795: INFO: stderr: "I0309 09:30:14.383945 2579 log.go:172] (0xc000a6ce70) (0xc0006e26e0) Create stream\nI0309 09:30:14.383987 2579 log.go:172] (0xc000a6ce70) (0xc0006e26e0) Stream added, broadcasting: 1\nI0309 09:30:14.385174 2579 log.go:172] (0xc000a6ce70) Reply frame received for 1\nI0309 09:30:14.385200 2579 log.go:172] (0xc000a6ce70) (0xc0007fe000) Create stream\nI0309 09:30:14.385211 2579 log.go:172] (0xc000a6ce70) (0xc0007fe000) Stream added, broadcasting: 3\nI0309 09:30:14.386057 2579 log.go:172] (0xc000a6ce70) Reply frame received for 3\nI0309 09:30:14.386110 2579 log.go:172] (0xc000a6ce70) (0xc0007fe0a0) Create stream\nI0309 09:30:14.386154 2579 log.go:172] (0xc000a6ce70) (0xc0007fe0a0) Stream added, broadcasting: 5\nI0309 09:30:14.386980 2579 log.go:172] (0xc000a6ce70) Reply frame received for 5\nI0309 09:30:14.450932 2579 log.go:172] (0xc000a6ce70) Data frame received for 5\nI0309 09:30:14.450949 2579 log.go:172] (0xc0007fe0a0) (5) Data frame handling\nI0309 09:30:14.450957 2579 log.go:172] (0xc0007fe0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:30:14.788877 2579 log.go:172] (0xc000a6ce70) Data frame received for 5\nI0309 09:30:14.788902 2579 log.go:172] (0xc0007fe0a0) (5) Data frame handling\nI0309 09:30:14.788920 2579 log.go:172] (0xc000a6ce70) Data frame received for 3\nI0309 09:30:14.788925 2579 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0309 09:30:14.788932 2579 log.go:172] (0xc0007fe000) (3) Data frame sent\nI0309 09:30:14.788938 2579 log.go:172] (0xc000a6ce70) Data frame 
received for 3\nI0309 09:30:14.788945 2579 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0309 09:30:14.792518 2579 log.go:172] (0xc000a6ce70) Data frame received for 1\nI0309 09:30:14.792538 2579 log.go:172] (0xc0006e26e0) (1) Data frame handling\nI0309 09:30:14.792549 2579 log.go:172] (0xc0006e26e0) (1) Data frame sent\nI0309 09:30:14.792557 2579 log.go:172] (0xc000a6ce70) (0xc0006e26e0) Stream removed, broadcasting: 1\nI0309 09:30:14.792567 2579 log.go:172] (0xc000a6ce70) Go away received\nI0309 09:30:14.792843 2579 log.go:172] (0xc000a6ce70) (0xc0006e26e0) Stream removed, broadcasting: 1\nI0309 09:30:14.792861 2579 log.go:172] (0xc000a6ce70) (0xc0007fe000) Stream removed, broadcasting: 3\nI0309 09:30:14.792868 2579 log.go:172] (0xc000a6ce70) (0xc0007fe0a0) Stream removed, broadcasting: 5\n" Mar 9 09:30:14.795: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 09:30:14.795: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 09:30:14.850: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 9 09:30:24.866: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 9 09:30:24.866: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 09:30:24.879: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:30:24.879: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:04 +0000 UTC }] Mar 9 09:30:24.879: INFO: Mar 9 09:30:24.879: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 9 09:30:25.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996126135s Mar 9 09:30:26.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992668045s Mar 9 09:30:27.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988704857s Mar 9 09:30:28.896: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983839107s Mar 9 09:30:29.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97959346s Mar 9 09:30:30.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.9552282s Mar 9 09:30:31.934: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.945526771s Mar 9 09:30:32.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.941024218s Mar 9 09:30:33.943: INFO: Verifying statefulset ss doesn't scale past 3 for another 936.522676ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-574 Mar 9 09:30:34.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-574 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 09:30:35.153: INFO: stderr: "I0309 09:30:35.085545 2598 log.go:172] (0xc0008c4840) (0xc0008921e0) Create stream\nI0309 09:30:35.085586 2598 log.go:172] (0xc0008c4840) (0xc0008921e0) Stream added, broadcasting: 1\nI0309 09:30:35.087468 2598 log.go:172] (0xc0008c4840) Reply frame received for 1\nI0309 09:30:35.087498 2598 
log.go:172] (0xc0008c4840) (0xc00054c500) Create stream\nI0309 09:30:35.087516 2598 log.go:172] (0xc0008c4840) (0xc00054c500) Stream added, broadcasting: 3\nI0309 09:30:35.088302 2598 log.go:172] (0xc0008c4840) Reply frame received for 3\nI0309 09:30:35.088325 2598 log.go:172] (0xc0008c4840) (0xc000635b80) Create stream\nI0309 09:30:35.088333 2598 log.go:172] (0xc0008c4840) (0xc000635b80) Stream added, broadcasting: 5\nI0309 09:30:35.088960 2598 log.go:172] (0xc0008c4840) Reply frame received for 5\nI0309 09:30:35.148912 2598 log.go:172] (0xc0008c4840) Data frame received for 5\nI0309 09:30:35.148936 2598 log.go:172] (0xc000635b80) (5) Data frame handling\nI0309 09:30:35.148948 2598 log.go:172] (0xc000635b80) (5) Data frame sent\nI0309 09:30:35.148959 2598 log.go:172] (0xc0008c4840) Data frame received for 5\nI0309 09:30:35.148974 2598 log.go:172] (0xc000635b80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 09:30:35.149006 2598 log.go:172] (0xc0008c4840) Data frame received for 3\nI0309 09:30:35.149036 2598 log.go:172] (0xc00054c500) (3) Data frame handling\nI0309 09:30:35.149053 2598 log.go:172] (0xc00054c500) (3) Data frame sent\nI0309 09:30:35.149068 2598 log.go:172] (0xc0008c4840) Data frame received for 3\nI0309 09:30:35.149078 2598 log.go:172] (0xc00054c500) (3) Data frame handling\nI0309 09:30:35.150182 2598 log.go:172] (0xc0008c4840) Data frame received for 1\nI0309 09:30:35.150211 2598 log.go:172] (0xc0008921e0) (1) Data frame handling\nI0309 09:30:35.150222 2598 log.go:172] (0xc0008921e0) (1) Data frame sent\nI0309 09:30:35.150236 2598 log.go:172] (0xc0008c4840) (0xc0008921e0) Stream removed, broadcasting: 1\nI0309 09:30:35.150458 2598 log.go:172] (0xc0008c4840) Go away received\nI0309 09:30:35.150535 2598 log.go:172] (0xc0008c4840) (0xc0008921e0) Stream removed, broadcasting: 1\nI0309 09:30:35.150551 2598 log.go:172] (0xc0008c4840) (0xc00054c500) Stream removed, broadcasting: 3\nI0309 09:30:35.150562 2598 log.go:172] (0xc0008c4840) (0xc000635b80) Stream removed, broadcasting: 5\n" Mar 9 09:30:35.153: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 09:30:35.153: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 09:30:35.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-574 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 09:30:35.356: INFO: stderr: "I0309 09:30:35.275376 2619 log.go:172] (0xc0000f46e0) (0xc0008ca000) Create stream\nI0309 09:30:35.275413 2619 log.go:172] (0xc0000f46e0) (0xc0008ca000) Stream added, broadcasting: 1\nI0309 09:30:35.278817 2619 log.go:172] (0xc0000f46e0) Reply frame received for 1\nI0309 09:30:35.278866 2619 log.go:172] (0xc0000f46e0) (0xc0006d7ea0) Create stream\nI0309 09:30:35.278883 2619 log.go:172] (0xc0000f46e0) (0xc0006d7ea0) Stream added, broadcasting: 3\nI0309 09:30:35.281365 2619 log.go:172] (0xc0000f46e0) Reply frame received for 3\nI0309 09:30:35.281385 2619 log.go:172] (0xc0000f46e0) (0xc0006d7f40) Create stream\nI0309 09:30:35.281393 2619 log.go:172] (0xc0000f46e0) (0xc0006d7f40) Stream added, broadcasting: 5\nI0309 09:30:35.281984 2619 log.go:172] (0xc0000f46e0) Reply frame received for 5\nI0309 09:30:35.352494 2619 log.go:172] (0xc0000f46e0) Data frame received for 3\nI0309 09:30:35.352529 2619 log.go:172] (0xc0006d7ea0) (3) Data frame handling\nI0309 09:30:35.352541 2619 
log.go:172] (0xc0006d7ea0) (3) Data frame sent\nI0309 09:30:35.352551 2619 log.go:172] (0xc0000f46e0) Data frame received for 3\nI0309 09:30:35.352557 2619 log.go:172] (0xc0006d7ea0) (3) Data frame handling\nI0309 09:30:35.352587 2619 log.go:172] (0xc0000f46e0) Data frame received for 5\nI0309 09:30:35.352622 2619 log.go:172] (0xc0006d7f40) (5) Data frame handling\nI0309 09:30:35.352638 2619 log.go:172] (0xc0006d7f40) (5) Data frame sent\nI0309 09:30:35.352651 2619 log.go:172] (0xc0000f46e0) Data frame received for 5\nI0309 09:30:35.352660 2619 log.go:172] (0xc0006d7f40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0309 09:30:35.353776 2619 log.go:172] (0xc0000f46e0) Data frame received for 1\nI0309 09:30:35.353794 2619 log.go:172] (0xc0008ca000) (1) Data frame handling\nI0309 09:30:35.353809 2619 log.go:172] (0xc0008ca000) (1) Data frame sent\nI0309 09:30:35.353820 2619 log.go:172] (0xc0000f46e0) (0xc0008ca000) Stream removed, broadcasting: 1\nI0309 09:30:35.353829 2619 log.go:172] (0xc0000f46e0) Go away received\nI0309 09:30:35.354110 2619 log.go:172] (0xc0000f46e0) (0xc0008ca000) Stream removed, broadcasting: 1\nI0309 09:30:35.354151 2619 log.go:172] (0xc0000f46e0) (0xc0006d7ea0) Stream removed, broadcasting: 3\nI0309 09:30:35.354159 2619 log.go:172] (0xc0000f46e0) (0xc0006d7f40) Stream removed, broadcasting: 5\n" Mar 9 09:30:35.357: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 09:30:35.357: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 09:30:35.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-574 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 9 09:30:35.564: INFO: stderr: "I0309 09:30:35.499438 2641 log.go:172] (0xc000bd9810) (0xc000b4c8c0) Create stream\nI0309 09:30:35.499493 2641 log.go:172] (0xc000bd9810) (0xc000b4c8c0) Stream added, broadcasting: 1\nI0309 09:30:35.502666 2641 log.go:172] (0xc000bd9810) Reply frame received for 1\nI0309 09:30:35.502710 2641 log.go:172] (0xc000bd9810) (0xc000562780) Create stream\nI0309 09:30:35.502723 2641 log.go:172] (0xc000bd9810) (0xc000562780) Stream added, broadcasting: 3\nI0309 09:30:35.503660 2641 log.go:172] (0xc000bd9810) Reply frame received for 3\nI0309 09:30:35.503702 2641 log.go:172] (0xc000bd9810) (0xc00078f540) Create stream\nI0309 09:30:35.503726 2641 log.go:172] (0xc000bd9810) (0xc00078f540) Stream added, broadcasting: 5\nI0309 09:30:35.504485 2641 log.go:172] (0xc000bd9810) Reply frame received for 5\nI0309 09:30:35.560240 2641 log.go:172] (0xc000bd9810) Data frame received for 3\nI0309 09:30:35.560281 2641 log.go:172] (0xc000562780) (3) Data frame handling\nI0309 09:30:35.560294 2641 log.go:172] (0xc000562780) (3) Data frame sent\nI0309 09:30:35.560303 2641 log.go:172] (0xc000bd9810) Data frame received for 3\nI0309 09:30:35.560309 2641 log.go:172] (0xc000562780) (3) Data frame handling\nI0309 09:30:35.560337 2641 log.go:172] (0xc000bd9810) Data frame received for 5\nI0309 09:30:35.560348 2641 log.go:172] (0xc00078f540) (5) Data frame handling\nI0309 09:30:35.560364 2641 log.go:172] (0xc00078f540) (5) Data frame sent\nI0309 09:30:35.560422 2641 log.go:172] (0xc000bd9810) Data frame received for 5\nI0309 09:30:35.560431 2641 log.go:172] (0xc00078f540) (5) Data frame handling\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0309 09:30:35.561662 2641 log.go:172] (0xc000bd9810) Data frame received for 1\nI0309 09:30:35.561691 2641 log.go:172] (0xc000b4c8c0) (1) Data frame handling\nI0309 09:30:35.561701 2641 log.go:172] (0xc000b4c8c0) (1) Data frame sent\nI0309 09:30:35.561714 2641 log.go:172] (0xc000bd9810) (0xc000b4c8c0) Stream removed, broadcasting: 1\nI0309 09:30:35.561732 2641 log.go:172] (0xc000bd9810) Go away received\nI0309 09:30:35.562026 2641 log.go:172] (0xc000bd9810) (0xc000b4c8c0) Stream removed, broadcasting: 1\nI0309 09:30:35.562042 2641 log.go:172] (0xc000bd9810) (0xc000562780) Stream removed, broadcasting: 3\nI0309 09:30:35.562047 2641 log.go:172] (0xc000bd9810) (0xc00078f540) Stream removed, broadcasting: 5\n" Mar 9 09:30:35.564: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 9 09:30:35.564: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 9 09:30:35.568: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 9 09:30:45.573: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 9 09:30:45.573: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 9 09:30:45.573: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 9 09:30:45.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-574 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 09:30:45.795: INFO: stderr: "I0309 09:30:45.713863 2660 log.go:172] (0xc000b05600) (0xc000b12820) Create stream\nI0309 09:30:45.713906 2660 log.go:172] (0xc000b05600) (0xc000b12820) Stream added, broadcasting: 1\nI0309 09:30:45.715933 2660 log.go:172] (0xc000b05600) Reply frame received for 1\nI0309 09:30:45.715964 2660 log.go:172] (0xc000b05600) (0xc000a205a0) Create stream\nI0309 09:30:45.715977 2660 log.go:172] (0xc000b05600) (0xc000a205a0) Stream added, broadcasting: 3\nI0309 09:30:45.716807 2660 log.go:172] (0xc000b05600) Reply frame received for 3\nI0309 09:30:45.716829 2660 log.go:172] (0xc000b05600) (0xc000b128c0) Create stream\nI0309 09:30:45.716835 2660 log.go:172] (0xc000b05600) (0xc000b128c0) Stream added, broadcasting: 5\nI0309 09:30:45.717554 2660 log.go:172] (0xc000b05600) Reply frame received for 5\nI0309 09:30:45.789186 2660 log.go:172] (0xc000b05600) Data frame received for 5\nI0309 09:30:45.789272 2660 log.go:172] (0xc000b128c0) (5) Data frame handling\nI0309 09:30:45.789295 2660 log.go:172] (0xc000b128c0) (5) Data frame sent\nI0309 09:30:45.789310 2660 log.go:172] (0xc000b05600) Data frame received for 5\nI0309 09:30:45.789320 2660 log.go:172] (0xc000b128c0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:30:45.789365 2660 log.go:172] (0xc000b05600) Data frame received for 3\nI0309 09:30:45.789410 2660 log.go:172] (0xc000a205a0) (3) Data frame handling\nI0309 09:30:45.789445 2660 log.go:172] (0xc000a205a0) (3) Data frame sent\nI0309 09:30:45.789468 2660 log.go:172] (0xc000b05600) Data frame received for 3\nI0309 09:30:45.789486 2660 log.go:172] (0xc000a205a0) (3) Data frame handling\nI0309 09:30:45.791170 2660 log.go:172] (0xc000b05600) Data frame received for 1\nI0309 09:30:45.791188 
2660 log.go:172] (0xc000b12820) (1) Data frame handling\nI0309 09:30:45.791203 2660 log.go:172] (0xc000b12820) (1) Data frame sent\nI0309 09:30:45.791789 2660 log.go:172] (0xc000b05600) (0xc000b12820) Stream removed, broadcasting: 1\nI0309 09:30:45.791941 2660 log.go:172] (0xc000b05600) Go away received\nI0309 09:30:45.792078 2660 log.go:172] (0xc000b05600) (0xc000b12820) Stream removed, broadcasting: 1\nI0309 09:30:45.792094 2660 log.go:172] (0xc000b05600) (0xc000a205a0) Stream removed, broadcasting: 3\nI0309 09:30:45.792103 2660 log.go:172] (0xc000b05600) (0xc000b128c0) Stream removed, broadcasting: 5\n" Mar 9 09:30:45.795: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 09:30:45.795: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 09:30:45.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-574 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 09:30:46.026: INFO: stderr: "I0309 09:30:45.905945 2680 log.go:172] (0xc0009e3080) (0xc000ad6460) Create stream\nI0309 09:30:45.905983 2680 log.go:172] (0xc0009e3080) (0xc000ad6460) Stream added, broadcasting: 1\nI0309 09:30:45.908506 2680 log.go:172] (0xc0009e3080) Reply frame received for 1\nI0309 09:30:45.908553 2680 log.go:172] (0xc0009e3080) (0xc000a360a0) Create stream\nI0309 09:30:45.908566 2680 log.go:172] (0xc0009e3080) (0xc000a360a0) Stream added, broadcasting: 3\nI0309 09:30:45.909332 2680 log.go:172] (0xc0009e3080) Reply frame received for 3\nI0309 09:30:45.909357 2680 log.go:172] (0xc0009e3080) (0xc000ad6500) Create stream\nI0309 09:30:45.909366 2680 log.go:172] (0xc0009e3080) (0xc000ad6500) Stream added, broadcasting: 5\nI0309 09:30:45.910110 2680 log.go:172] (0xc0009e3080) Reply frame received for 5\nI0309 09:30:45.976527 2680 log.go:172] (0xc0009e3080) Data frame received for 5\nI0309 09:30:45.976550 2680 log.go:172] (0xc000ad6500) (5) Data frame handling\nI0309 09:30:45.976565 2680 log.go:172] (0xc000ad6500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:30:46.021333 2680 log.go:172] (0xc0009e3080) Data frame received for 3\nI0309 09:30:46.021358 2680 log.go:172] (0xc000a360a0) (3) Data frame handling\nI0309 09:30:46.021371 2680 log.go:172] (0xc000a360a0) (3) Data frame sent\nI0309 09:30:46.021382 2680 log.go:172] (0xc0009e3080) Data frame received for 3\nI0309 09:30:46.021392 2680 log.go:172] (0xc000a360a0) (3) Data frame handling\nI0309 09:30:46.021807 2680 log.go:172] (0xc0009e3080) Data frame received for 5\nI0309 09:30:46.021831 2680 log.go:172] (0xc000ad6500) (5) Data frame handling\nI0309 09:30:46.023165 2680 log.go:172] (0xc0009e3080) Data frame received for 1\nI0309 09:30:46.023184 2680 log.go:172] (0xc000ad6460) (1) Data frame handling\nI0309 09:30:46.023200 2680 log.go:172] (0xc000ad6460) (1) Data frame sent\nI0309 09:30:46.023218 2680 log.go:172] (0xc0009e3080) (0xc000ad6460) Stream removed, broadcasting: 1\nI0309 09:30:46.023382 2680 log.go:172] (0xc0009e3080) Go away received\nI0309 09:30:46.023499 2680 log.go:172] (0xc0009e3080) (0xc000ad6460) Stream removed, broadcasting: 1\nI0309 09:30:46.023515 2680 log.go:172] (0xc0009e3080) (0xc000a360a0) Stream removed, broadcasting: 3\nI0309 09:30:46.023523 2680 log.go:172] (0xc0009e3080) (0xc000ad6500) Stream removed, broadcasting: 5\n" Mar 9 09:30:46.026: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" Mar 9 09:30:46.026: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 09:30:46.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-574 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 9 09:30:46.249: INFO: stderr: "I0309 09:30:46.156842 2700 log.go:172] (0xc000b040b0) (0xc0006a5b80) Create stream\nI0309 09:30:46.156889 2700 log.go:172] (0xc000b040b0) (0xc0006a5b80) Stream added, broadcasting: 1\nI0309 09:30:46.159733 2700 log.go:172] (0xc000b040b0) Reply frame received for 1\nI0309 09:30:46.159769 2700 log.go:172] (0xc000b040b0) (0xc0006a5d60) Create stream\nI0309 09:30:46.159777 2700 log.go:172] (0xc000b040b0) (0xc0006a5d60) Stream added, broadcasting: 3\nI0309 09:30:46.160706 2700 log.go:172] (0xc000b040b0) Reply frame received for 3\nI0309 09:30:46.160746 2700 log.go:172] (0xc000b040b0) (0xc0006a5e00) Create stream\nI0309 09:30:46.160763 2700 log.go:172] (0xc000b040b0) (0xc0006a5e00) Stream added, broadcasting: 5\nI0309 09:30:46.161551 2700 log.go:172] (0xc000b040b0) Reply frame received for 5\nI0309 09:30:46.224512 2700 log.go:172] (0xc000b040b0) Data frame received for 5\nI0309 09:30:46.224531 2700 log.go:172] (0xc0006a5e00) (5) Data frame handling\nI0309 09:30:46.224543 2700 log.go:172] (0xc0006a5e00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:30:46.244939 2700 log.go:172] (0xc000b040b0) Data frame received for 3\nI0309 09:30:46.244960 2700 log.go:172] (0xc0006a5d60) (3) Data frame handling\nI0309 09:30:46.244976 2700 log.go:172] (0xc0006a5d60) (3) Data frame sent\nI0309 09:30:46.244987 2700 log.go:172] (0xc000b040b0) Data frame received for 3\nI0309 09:30:46.244997 2700 log.go:172] (0xc0006a5d60) (3) Data frame handling\nI0309 09:30:46.245156 2700 log.go:172] (0xc000b040b0) Data frame received for 5\nI0309 09:30:46.245177 2700 log.go:172] (0xc0006a5e00) (5) Data frame handling\nI0309 09:30:46.246550 2700 log.go:172] (0xc000b040b0) Data frame received for 1\nI0309 09:30:46.246588 2700 log.go:172] (0xc0006a5b80) (1) Data frame handling\nI0309 09:30:46.246613 2700 log.go:172] (0xc0006a5b80) (1) Data frame sent\nI0309 09:30:46.246630 2700 log.go:172] (0xc000b040b0) (0xc0006a5b80) Stream removed, broadcasting: 1\nI0309 09:30:46.246666 2700 log.go:172] (0xc000b040b0) Go away received\nI0309 09:30:46.247035 2700 log.go:172] (0xc000b040b0) (0xc0006a5b80) Stream removed, broadcasting: 1\nI0309 09:30:46.247048 2700 log.go:172] (0xc000b040b0) (0xc0006a5d60) Stream removed, broadcasting: 3\nI0309 09:30:46.247054 2700 log.go:172] (0xc000b040b0) (0xc0006a5e00) Stream removed, broadcasting: 5\n" Mar 9 09:30:46.249: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 9 09:30:46.249: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 9 09:30:46.249: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 09:30:46.252: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 9 09:30:56.260: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 9 09:30:56.260: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 9 09:30:56.260: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 9 
09:30:56.276: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:30:56.276: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:04 +0000 UTC }] Mar 9 09:30:56.277: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:30:56.277: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:30:56.277: INFO: Mar 9 09:30:56.277: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 09:30:57.281: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:30:57.281: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:04 +0000 UTC }] Mar 9 09:30:57.281: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:30:57.281: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:30:57.281: INFO: Mar 9 09:30:57.281: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 09:30:58.298: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:30:58.298: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-03-09 09:30:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:04 +0000 UTC }] Mar 9 09:30:58.298: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:30:58.298: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:30:58.298: INFO: Mar 9 09:30:58.298: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 9 09:30:59.302: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:30:59.302: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:30:59.302: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:30:59.302: INFO: Mar 9 09:30:59.302: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 9 09:31:00.306: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:31:00.306: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:00.306: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:00.306: INFO: Mar 9 09:31:00.306: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 9 09:31:01.310: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:31:01.311: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:01.311: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:01.311: INFO: Mar 9 09:31:01.311: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 9 09:31:02.315: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:31:02.315: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:02.315: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:02.315: INFO: Mar 9 09:31:02.315: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 9 09:31:03.319: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:31:03.319: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:03.319: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:03.319: INFO: Mar 9 09:31:03.319: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 9 09:31:04.323: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:31:04.323: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:04.323: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:04.323: INFO: Mar 9 09:31:04.323: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 9 09:31:05.328: INFO: POD NODE PHASE GRACE CONDITIONS Mar 9 09:31:05.328: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:05.328: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-09 09:30:24 +0000 UTC }] Mar 9 09:31:05.328: INFO: Mar 9 09:31:05.328: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-574 Mar 9 09:31:06.332: INFO: Scaling statefulset ss to 0 Mar 9 09:31:06.339: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 9 09:31:06.341: INFO: Deleting all statefulset in ns statefulset-574 Mar 9 09:31:06.343: INFO: Scaling statefulset ss to 0 Mar 9 09:31:06.349: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 09:31:06.351: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:31:06.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-574" for this suite.
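"Burst" scaling in the run above corresponds to a StatefulSet with podManagementPolicy: Parallel, which lets the controller create and delete pods without waiting for their neighbours to become Ready (hence the scale operations proceed even while all replicas report Ready=false). A minimal sketch of such an object, reusing the httpd image, webserver container name, and headless Service "test" visible in the log; the label key/value and the readiness probe are illustrative guesses at how the suite toggles Ready by moving index.html aside:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test               # headless Service created in BeforeEach
  podManagementPolicy: Parallel   # burst scaling: no ordered one-at-a-time rollout
  replicas: 3
  selector:
    matchLabels:
      app: ss                     # label assumed
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine
        readinessProbe:           # fails once index.html is mv'd to /tmp, as in the log
          httpGet:
            path: /index.html
            port: 80
EOF
# Scaling then ignores readiness: this drains all three pods at once
# instead of waiting for ss-2, ss-1, ss-0 in reverse ordinal order.
kubectl scale statefulset ss --replicas=0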
• [SLOW TEST:62.487 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":196,"skipped":3148,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:31:06.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3599.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3599.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 09:31:10.536: INFO: DNS probes using dns-3599/dns-test-1f5bfee5-f2a9-4000-aa69-b3fe3141f104 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:31:10.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3599" for this suite. •{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":197,"skipped":3165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:31:10.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:31:11.193: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:31:14.235: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:31:14.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:31:15.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3227" for this suite. STEP: Destroying namespace "webhook-3227-markers" for this suite. 
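The deny behaviour registered above is wired through a ValidatingWebhookConfiguration that points CREATE, UPDATE, and DELETE of the custom resource at the e2e-test-webhook Service seen in the log. A rough sketch; the group, resource plural, webhook name, and handler path are hypothetical, since the suite's actual values are not printed here, while the namespace and Service name are taken from the run:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-example     # hypothetical name
webhooks:
- name: deny-crd.example.com             # hypothetical
  rules:
  - apiGroups: ["example.com"]           # assumed CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["e2e-test-crds"]         # assumed plural resource name
  clientConfig:
    service:
      namespace: webhook-3227            # from the run above
      name: e2e-test-webhook             # from the run above
      path: /custom-resource             # assumed handler path
    # caBundle: <base64 CA from the "Setting up server cert" step>
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
EOF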
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":198,"skipped":3199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:31:15.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:31:17.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-244" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3263,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:31:17.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-9130 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9130 STEP: Deleting pre-stop pod Mar 9 09:31:26.997: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:31:27.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9130" for this suite. • [SLOW TEST:9.200 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":200,"skipped":3266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:31:27.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 9 09:31:27.111: INFO: >>> kubeConfig: /root/.kube/config Mar 9 09:31:29.924: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:31:40.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3948" for this suite. 
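For the OpenAPI-publishing case just completed, the fixture shape is two CRDs that share spec.group and a version name while differing only in kind. A compact sketch with assumed names (the suite generates random group names at run time):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com          # must be <plural>.<group>
spec:
  group: stable.example.com              # assumed group
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# A second CRD identical except names (kind: Bar, plural: bars) publishes a
# second schema under the same group/version. Both kinds should then appear
# in the aggregated document; definition names are reverse-DNS, e.g.
# com.example.stable.v1.Foo:
kubectl get --raw /openapi/v2 | grep -c 'com.example.stable.v1'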
• [SLOW TEST:13.195 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":201,"skipped":3289,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:31:40.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 9 09:31:40.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4499' Mar 9 09:31:40.356: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 9 09:31:40.356: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738 Mar 9 09:31:44.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4499' Mar 9 09:31:44.482: INFO: stderr: "" Mar 9 09:31:44.482: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:31:44.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4499" for this suite. 
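The kubectl run above leans on --generator=deployment/apps.v1, and the suite itself logs the deprecation warning. On current clusters the same fixture is created without a generator; a sketch reusing the image and namespace from the run:

# Modern equivalent of the deprecated `kubectl run --generator=deployment/apps.v1`:
kubectl create deployment e2e-test-httpd-deployment \
  --image=docker.io/library/httpd:2.4.38-alpine \
  --namespace=kubectl-4499
# Verify the Deployment and its pod (kubectl create deployment labels them
# app=<name>), then clean up as the test's AfterEach does:
kubectl get deployment,pods -l app=e2e-test-httpd-deployment -n kubectl-4499
kubectl delete deployment e2e-test-httpd-deployment -n kubectl-4499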
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":202,"skipped":3294,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:31:44.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-8a9b7722-84f6-4377-a24d-4f845d41a554 STEP: Creating a pod to test consume secrets Mar 9 09:31:44.571: INFO: Waiting up to 5m0s for pod "pod-secrets-0c38d522-4642-4bf3-9c1a-c2134129dd0c" in namespace "secrets-5883" to be "success or failure" Mar 9 09:31:44.576: INFO: Pod "pod-secrets-0c38d522-4642-4bf3-9c1a-c2134129dd0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255533ms Mar 9 09:31:46.596: INFO: Pod "pod-secrets-0c38d522-4642-4bf3-9c1a-c2134129dd0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025046334s Mar 9 09:31:48.600: INFO: Pod "pod-secrets-0c38d522-4642-4bf3-9c1a-c2134129dd0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0289735s STEP: Saw pod success Mar 9 09:31:48.600: INFO: Pod "pod-secrets-0c38d522-4642-4bf3-9c1a-c2134129dd0c" satisfied condition "success or failure" Mar 9 09:31:48.603: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-0c38d522-4642-4bf3-9c1a-c2134129dd0c container secret-env-test: STEP: delete the pod Mar 9 09:31:48.644: INFO: Waiting for pod pod-secrets-0c38d522-4642-4bf3-9c1a-c2134129dd0c to disappear Mar 9 09:31:48.654: INFO: Pod pod-secrets-0c38d522-4642-4bf3-9c1a-c2134129dd0c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:31:48.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5883" for this suite. 
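The pattern under test above is a Secret surfaced as an environment variable through env[].valueFrom.secretKeyRef; "success or failure" is simply the pod running to completion. A minimal sketch, with the Secret name, key, variable name, and image assumed (only the secret-env-test container name comes from the log):

kubectl create secret generic secret-env-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test        # container name matches the log
    image: docker.io/library/busybox:1.29   # image assumed
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA          # assumed variable name
      valueFrom:
        secretKeyRef:
          name: secret-env-example
          key: data-1            # assumed key
EOF
# The injected variable shows up in the completed pod's logs:
kubectl logs pod-secrets-example | grep SECRET_DATA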
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3296,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:31:48.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-bdb1a710-e4a9-4092-8cbd-e767cb938b55 STEP: Creating a pod to test consume secrets Mar 9 09:31:48.727: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fc215701-97be-4985-b351-3c00a80f7b93" in namespace "projected-6046" to be "success or failure" Mar 9 09:31:48.733: INFO: Pod "pod-projected-secrets-fc215701-97be-4985-b351-3c00a80f7b93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142257ms Mar 9 09:31:50.736: INFO: Pod "pod-projected-secrets-fc215701-97be-4985-b351-3c00a80f7b93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009572597s STEP: Saw pod success Mar 9 09:31:50.736: INFO: Pod "pod-projected-secrets-fc215701-97be-4985-b351-3c00a80f7b93" satisfied condition "success or failure" Mar 9 09:31:50.739: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-fc215701-97be-4985-b351-3c00a80f7b93 container projected-secret-volume-test: STEP: delete the pod Mar 9 09:31:50.776: INFO: Waiting for pod pod-projected-secrets-fc215701-97be-4985-b351-3c00a80f7b93 to disappear Mar 9 09:31:50.781: INFO: Pod pod-projected-secrets-fc215701-97be-4985-b351-3c00a80f7b93 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:31:50.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6046" for this suite. 
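Projected secrets differ from plain secret volumes only in that the source sits under volumes[].projected.sources, which allows mixing several sources (secrets, configMaps, downwardAPI, serviceAccountToken) into a single mount. A sketch with assumed Secret name, key, and image (the projected-secret-volume-test container name comes from the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test   # container name matches the log
    image: docker.io/library/busybox:1.29   # image assumed
    command: ["cat", "/etc/projected/secret-data"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: projected-secret-example   # assumed Secret name
          items:
          - key: data-1                    # assumed key
            path: secret-data
EOF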
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3318,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:31:50.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 9 09:31:50.855: INFO: Waiting up to 5m0s for pod "var-expansion-0857a078-12af-4de3-ad7a-0ece24f01820" in namespace "var-expansion-2298" to be "success or failure" Mar 9 09:31:50.865: INFO: Pod "var-expansion-0857a078-12af-4de3-ad7a-0ece24f01820": Phase="Pending", Reason="", readiness=false. Elapsed: 9.820658ms Mar 9 09:31:52.869: INFO: Pod "var-expansion-0857a078-12af-4de3-ad7a-0ece24f01820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013438642s STEP: Saw pod success Mar 9 09:31:52.869: INFO: Pod "var-expansion-0857a078-12af-4de3-ad7a-0ece24f01820" satisfied condition "success or failure" Mar 9 09:31:52.871: INFO: Trying to get logs from node jerma-worker pod var-expansion-0857a078-12af-4de3-ad7a-0ece24f01820 container dapi-container: STEP: delete the pod Mar 9 09:31:52.923: INFO: Waiting for pod var-expansion-0857a078-12af-4de3-ad7a-0ece24f01820 to disappear Mar 9 09:31:52.930: INFO: Pod var-expansion-0857a078-12af-4de3-ad7a-0ece24f01820 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:31:52.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2298" for this suite. 
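Variable expansion in a container's command uses the $(VAR) syntax, which the kubelet resolves from the container's env block before the process starts; no shell is involved, unlike $VAR inside an `sh -c` string. A sketch with an assumed variable name, value, and image (the dapi-container name comes from the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container         # container name matches the log
    image: docker.io/library/busybox:1.29   # image assumed
    env:
    - name: MESSAGE
      value: "test message"      # assumed value
    # $(MESSAGE) below is substituted by the kubelet, not by a shell.
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
EOF
kubectl logs var-expansion-example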
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:31:52.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 9 09:31:53.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8485' Mar 9 09:31:53.328: INFO: stderr: "" Mar 9 09:31:53.328: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 9 09:31:53.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8485' Mar 9 09:31:53.439: INFO: stderr: "" Mar 9 09:31:53.439: INFO: stdout: "update-demo-nautilus-kgxxm update-demo-nautilus-whxsn " Mar 9 09:31:53.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kgxxm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8485' Mar 9 09:31:53.525: INFO: stderr: "" Mar 9 09:31:53.525: INFO: stdout: "" Mar 9 09:31:53.525: INFO: update-demo-nautilus-kgxxm is created but not running Mar 9 09:31:58.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8485' Mar 9 09:31:58.629: INFO: stderr: "" Mar 9 09:31:58.629: INFO: stdout: "update-demo-nautilus-kgxxm update-demo-nautilus-whxsn " Mar 9 09:31:58.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kgxxm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8485' Mar 9 09:31:58.720: INFO: stderr: "" Mar 9 09:31:58.720: INFO: stdout: "true" Mar 9 09:31:58.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kgxxm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8485' Mar 9 09:31:58.810: INFO: stderr: "" Mar 9 09:31:58.810: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 09:31:58.810: INFO: validating pod update-demo-nautilus-kgxxm Mar 9 09:31:58.814: INFO: got data: { "image": "nautilus.jpg" } Mar 9 09:31:58.814: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 09:31:58.814: INFO: update-demo-nautilus-kgxxm is verified up and running Mar 9 09:31:58.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-whxsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8485' Mar 9 09:31:58.896: INFO: stderr: "" Mar 9 09:31:58.896: INFO: stdout: "true" Mar 9 09:31:58.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-whxsn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8485' Mar 9 09:31:58.965: INFO: stderr: "" Mar 9 09:31:58.965: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 9 09:31:58.965: INFO: validating pod update-demo-nautilus-whxsn Mar 9 09:31:58.968: INFO: got data: { "image": "nautilus.jpg" } Mar 9 09:31:58.968: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 9 09:31:58.968: INFO: update-demo-nautilus-whxsn is verified up and running STEP: rolling-update to new replication controller Mar 9 09:31:58.970: INFO: scanned /root for discovery docs: Mar 9 09:31:58.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8485' Mar 9 09:32:21.482: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 9 09:32:21.482: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 9 09:32:21.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8485' Mar 9 09:32:21.591: INFO: stderr: "" Mar 9 09:32:21.591: INFO: stdout: "update-demo-kitten-5tnrd update-demo-kitten-8t4lq " Mar 9 09:32:21.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5tnrd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8485' Mar 9 09:32:21.686: INFO: stderr: "" Mar 9 09:32:21.686: INFO: stdout: "true" Mar 9 09:32:21.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5tnrd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8485' Mar 9 09:32:21.773: INFO: stderr: "" Mar 9 09:32:21.773: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 9 09:32:21.773: INFO: validating pod update-demo-kitten-5tnrd Mar 9 09:32:21.777: INFO: got data: { "image": "kitten.jpg" } Mar 9 09:32:21.777: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 9 09:32:21.777: INFO: update-demo-kitten-5tnrd is verified up and running Mar 9 09:32:21.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8t4lq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8485' Mar 9 09:32:21.843: INFO: stderr: "" Mar 9 09:32:21.843: INFO: stdout: "true" Mar 9 09:32:21.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8t4lq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8485' Mar 9 09:32:21.924: INFO: stderr: "" Mar 9 09:32:21.924: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 9 09:32:21.924: INFO: validating pod update-demo-kitten-8t4lq Mar 9 09:32:21.927: INFO: got data: { "image": "kitten.jpg" } Mar 9 09:32:21.927: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 9 09:32:21.927: INFO: update-demo-kitten-8t4lq is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:32:21.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8485" for this suite. 
• [SLOW TEST:28.992 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":206,"skipped":3350,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:32:21.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 9 09:32:22.040: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:32:25.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7484" for this suite. 
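The invariant checked by this test is that init containers run to completion, in order, before the app container starts; a minimal RestartNever sketch, with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-container-example   # illustrative name
spec:
  restartPolicy: Never
  initContainers:                # each must exit 0 before the next starts
  - name: init-1
    image: busybox
    command: ["/bin/true"]
  - name: init-2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run-1
    image: busybox
    command: ["/bin/true"]
EOF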
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":207,"skipped":3351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:32:25.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:32:26.476: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:32:29.506: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:32:29.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1900" for this suite. STEP: Destroying namespace "webhook-1900-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":208,"skipped":3379,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:32:29.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:32:29.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-268fef37-7749-4590-b2de-693906f22eb3" in namespace "projected-3485" to be "success or failure" Mar 9 09:32:29.873: INFO: Pod "downwardapi-volume-268fef37-7749-4590-b2de-693906f22eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 46.606154ms Mar 9 09:32:31.877: INFO: Pod "downwardapi-volume-268fef37-7749-4590-b2de-693906f22eb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.050537667s STEP: Saw pod success Mar 9 09:32:31.877: INFO: Pod "downwardapi-volume-268fef37-7749-4590-b2de-693906f22eb3" satisfied condition "success or failure" Mar 9 09:32:31.879: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-268fef37-7749-4590-b2de-693906f22eb3 container client-container: STEP: delete the pod Mar 9 09:32:31.922: INFO: Waiting for pod downwardapi-volume-268fef37-7749-4590-b2de-693906f22eb3 to disappear Mar 9 09:32:31.931: INFO: Pod downwardapi-volume-268fef37-7749-4590-b2de-693906f22eb3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:32:31.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3485" for this suite. 
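DefaultMode here is the projected volume's file-mode default; a minimal sketch of the stanza under test, assuming 0400 as the example mode (names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["ls", "-l", "/etc/podinfo/podname"]   # mode should show as r--------
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400   # applied to files that don't set their own mode
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF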
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3381,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:32:31.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5309.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5309.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5309.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5309.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5309.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5309.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 9 09:32:36.094: INFO: DNS probes using dns-5309/dns-test-6f1ef26f-fb53-454c-9a6e-9f8dd61b1921 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:32:36.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5309" for this suite. 
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":210,"skipped":3389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:32:36.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 9 09:32:36.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3981' Mar 9 09:32:36.318: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 9 09:32:36.318: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602 Mar 9 09:32:38.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3981' Mar 9 09:32:38.522: INFO: stderr: "" Mar 9 09:32:38.522: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:32:38.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3981" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":211,"skipped":3427,"failed":0} SS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:32:38.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-2bab9e15-9399-41da-9933-a81a21761a7e [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:32:38.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9365" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":212,"skipped":3429,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:32:38.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 9 09:32:41.751: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:32:41.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7281" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3432,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:32:41.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 9 09:32:44.409: INFO: Successfully updated pod "adopt-release-j7qn8" STEP: Checking that the Job readopts the Pod Mar 9 09:32:44.409: INFO: Waiting up to 15m0s for pod "adopt-release-j7qn8" in namespace "job-1418" to be "adopted" Mar 9 09:32:44.447: INFO: Pod "adopt-release-j7qn8": Phase="Running", Reason="", readiness=true. Elapsed: 37.961051ms Mar 9 09:32:46.450: INFO: Pod "adopt-release-j7qn8": Phase="Running", Reason="", readiness=true. Elapsed: 2.041619865s Mar 9 09:32:46.450: INFO: Pod "adopt-release-j7qn8" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 9 09:32:46.957: INFO: Successfully updated pod "adopt-release-j7qn8" STEP: Checking that the Job releases the Pod Mar 9 09:32:46.958: INFO: Waiting up to 15m0s for pod "adopt-release-j7qn8" in namespace "job-1418" to be "released" Mar 9 09:32:46.992: INFO: Pod "adopt-release-j7qn8": Phase="Running", Reason="", readiness=true. Elapsed: 34.633029ms Mar 9 09:32:46.992: INFO: Pod "adopt-release-j7qn8" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:32:46.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1418" for this suite. 
• [SLOW TEST:5.238 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":214,"skipped":3439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:32:47.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-8f145c2e-d7f8-4eb3-adc8-877b3836c157 STEP: Creating secret with name s-test-opt-upd-2f3b53e8-6869-4b8e-b512-19cef2b00bda STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8f145c2e-d7f8-4eb3-adc8-877b3836c157 STEP: Updating secret s-test-opt-upd-2f3b53e8-6869-4b8e-b512-19cef2b00bda STEP: Creating secret with name s-test-opt-create-4c5e94d6-67f8-4d6a-83ab-df345d11f8d0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:34:19.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7271" for this suite. 
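The optional marker is what lets the pod start while one of its projected secrets is still absent; deletions, updates, and late creations are then reflected in the mounted files, which is what the long wait above observes. A minimal sketch, with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-example   # illustrative name
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-create-example   # may not exist yet
          optional: true                    # pod still starts; files appear once the secret is created
EOF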
• [SLOW TEST:92.738 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3490,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:34:19.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 STEP: creating the pod Mar 9 09:34:19.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4319' Mar 9 09:34:20.143: INFO: stderr: "" Mar 9 09:34:20.143: INFO: stdout: "pod/pause created\n" Mar 9 09:34:20.143: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 9 09:34:20.143: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4319" to be "running and ready" Mar 9 09:34:20.177: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 34.55954ms Mar 9 09:34:22.181: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.038577411s Mar 9 09:34:22.181: INFO: Pod "pause" satisfied condition "running and ready" Mar 9 09:34:22.181: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 9 09:34:22.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4319' Mar 9 09:34:22.303: INFO: stderr: "" Mar 9 09:34:22.303: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 9 09:34:22.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4319' Mar 9 09:34:22.400: INFO: stderr: "" Mar 9 09:34:22.400: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 9 09:34:22.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4319' Mar 9 09:34:22.490: INFO: stderr: "" Mar 9 09:34:22.490: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 9 09:34:22.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4319' Mar 9 09:34:22.564: INFO: stderr: "" Mar 9 09:34:22.564: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 STEP: using delete to clean up resources Mar 9 09:34:22.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4319' Mar 9 09:34:22.691: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 9 09:34:22.691: INFO: stdout: "pod \"pause\" force deleted\n" Mar 9 09:34:22.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4319' Mar 9 09:34:22.785: INFO: stderr: "No resources found in kubectl-4319 namespace.\n" Mar 9 09:34:22.785: INFO: stdout: "" Mar 9 09:34:22.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4319 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 9 09:34:22.852: INFO: stderr: "" Mar 9 09:34:22.852: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:34:22.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4319" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":216,"skipped":3503,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:34:22.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 9 09:34:23.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:34:23.025: INFO: Number of nodes with available pods: 0 Mar 9 09:34:23.025: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:34:24.038: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:34:24.041: INFO: Number of nodes with available pods: 0 Mar 9 09:34:24.041: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:34:25.035: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:34:25.039: INFO: Number of nodes with available pods: 2 Mar 9 09:34:25.039: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 9 09:34:25.062: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 9 09:34:25.068: INFO: Number of nodes with available pods: 2 Mar 9 09:34:25.068: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3425, will wait for the garbage collector to delete the pods Mar 9 09:34:26.167: INFO: Deleting DaemonSet.extensions daemon-set took: 5.432411ms Mar 9 09:34:26.567: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.279457ms Mar 9 09:35:56.191: INFO: Number of nodes with available pods: 0 Mar 9 09:35:56.191: INFO: Number of running nodes: 0, number of available pods: 0 Mar 9 09:35:56.194: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3425/daemonsets","resourceVersion":"276307"},"items":null} Mar 9 09:35:56.197: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3425/pods","resourceVersion":"276307"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:35:56.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3425" for this suite. • [SLOW TEST:93.364 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":217,"skipped":3539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:35:56.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:35:56.274: INFO: Waiting up to 5m0s for pod "busybox-user-65534-5718f4b4-a49f-427a-bfae-b2695559f21c" in namespace "security-context-test-9725" to be "success or failure" Mar 9 09:35:56.277: INFO: Pod "busybox-user-65534-5718f4b4-a49f-427a-bfae-b2695559f21c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.974986ms Mar 9 09:35:58.280: INFO: Pod "busybox-user-65534-5718f4b4-a49f-427a-bfae-b2695559f21c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00644742s Mar 9 09:35:58.280: INFO: Pod "busybox-user-65534-5718f4b4-a49f-427a-bfae-b2695559f21c" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:35:58.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9725" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3592,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:35:58.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-bhk8 STEP: Creating a pod to test atomic-volume-subpath Mar 9 09:35:58.413: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bhk8" in namespace "subpath-8398" to be "success or failure" Mar 9 09:35:58.417: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.687155ms Mar 9 09:36:00.421: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Running", Reason="", readiness=true. Elapsed: 2.008522633s Mar 9 09:36:02.425: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Running", Reason="", readiness=true. Elapsed: 4.012279335s Mar 9 09:36:04.428: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Running", Reason="", readiness=true. Elapsed: 6.015737234s Mar 9 09:36:06.443: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Running", Reason="", readiness=true. Elapsed: 8.030401876s Mar 9 09:36:08.449: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Running", Reason="", readiness=true. Elapsed: 10.036171686s Mar 9 09:36:10.455: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Running", Reason="", readiness=true. Elapsed: 12.042273772s Mar 9 09:36:12.459: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Running", Reason="", readiness=true. Elapsed: 14.046228487s Mar 9 09:36:14.469: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Running", Reason="", readiness=true. Elapsed: 16.056268256s Mar 9 09:36:16.475: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Running", Reason="", readiness=true. Elapsed: 18.062662761s Mar 9 09:36:18.479: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Running", Reason="", readiness=true. Elapsed: 20.066843824s Mar 9 09:36:20.483: INFO: Pod "pod-subpath-test-configmap-bhk8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.070721986s STEP: Saw pod success Mar 9 09:36:20.483: INFO: Pod "pod-subpath-test-configmap-bhk8" satisfied condition "success or failure" Mar 9 09:36:20.486: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-bhk8 container test-container-subpath-configmap-bhk8: STEP: delete the pod Mar 9 09:36:20.513: INFO: Waiting for pod pod-subpath-test-configmap-bhk8 to disappear Mar 9 09:36:20.517: INFO: Pod pod-subpath-test-configmap-bhk8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-bhk8 Mar 9 09:36:20.517: INFO: Deleting pod "pod-subpath-test-configmap-bhk8" in namespace "subpath-8398" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:36:20.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8398" for this suite. • [SLOW TEST:22.240 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":219,"skipped":3595,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:36:20.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:36:49.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7109" for this suite. STEP: Destroying namespace "nsdeletetest-2885" for this suite. Mar 9 09:36:49.837: INFO: Namespace nsdeletetest-2885 was already deleted STEP: Destroying namespace "nsdeletetest-2377" for this suite. 
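The same guarantee can be checked by hand: deleting a namespace tears down every pod inside it before the namespace object itself goes away (names are illustrative):

kubectl create namespace nsdeletetest-example
kubectl run test-pod --image=docker.io/library/httpd:2.4.38-alpine --restart=Never -n nsdeletetest-example
kubectl delete namespace nsdeletetest-example   # blocks while pods are terminated
kubectl get pods -n nsdeletetest-example        # errors once the namespace is gone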
• [SLOW TEST:29.313 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":220,"skipped":3596,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:36:49.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:36:49.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a487c30-a2e5-4862-830a-a8042c594fd1" in namespace "downward-api-710" to be "success or failure" Mar 9 09:36:49.944: INFO: Pod "downwardapi-volume-8a487c30-a2e5-4862-830a-a8042c594fd1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.170638ms Mar 9 09:36:51.948: INFO: Pod "downwardapi-volume-8a487c30-a2e5-4862-830a-a8042c594fd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.036204042s STEP: Saw pod success Mar 9 09:36:51.948: INFO: Pod "downwardapi-volume-8a487c30-a2e5-4862-830a-a8042c594fd1" satisfied condition "success or failure" Mar 9 09:36:51.951: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8a487c30-a2e5-4862-830a-a8042c594fd1 container client-container: STEP: delete the pod Mar 9 09:36:51.977: INFO: Waiting for pod downwardapi-volume-8a487c30-a2e5-4862-830a-a8042c594fd1 to disappear Mar 9 09:36:51.988: INFO: Pod downwardapi-volume-8a487c30-a2e5-4862-830a-a8042c594fd1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:36:51.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-710" for this suite. 
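"Podname only" means a single downwardAPI item mapping metadata.name into a file; a minimal sketch, with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]   # prints the pod's own name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF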
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3604,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:36:51.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:36:52.133: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aede4dd0-dda5-46b7-9a6b-116e8a0a703c" in namespace "projected-7850" to be "success or failure" Mar 9 09:36:52.200: INFO: Pod "downwardapi-volume-aede4dd0-dda5-46b7-9a6b-116e8a0a703c": Phase="Pending", Reason="", readiness=false. Elapsed: 66.977821ms Mar 9 09:36:54.204: INFO: Pod "downwardapi-volume-aede4dd0-dda5-46b7-9a6b-116e8a0a703c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071065934s Mar 9 09:36:56.209: INFO: Pod "downwardapi-volume-aede4dd0-dda5-46b7-9a6b-116e8a0a703c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075244683s STEP: Saw pod success Mar 9 09:36:56.209: INFO: Pod "downwardapi-volume-aede4dd0-dda5-46b7-9a6b-116e8a0a703c" satisfied condition "success or failure" Mar 9 09:36:56.212: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-aede4dd0-dda5-46b7-9a6b-116e8a0a703c container client-container: STEP: delete the pod Mar 9 09:36:56.266: INFO: Waiting for pod downwardapi-volume-aede4dd0-dda5-46b7-9a6b-116e8a0a703c to disappear Mar 9 09:36:56.272: INFO: Pod downwardapi-volume-aede4dd0-dda5-46b7-9a6b-116e8a0a703c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:36:56.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7850" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3651,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:36:56.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-7889/configmap-test-a642237f-1a0b-4614-b19b-1383212d7ec7 STEP: Creating a pod to test consume configMaps Mar 9 09:36:56.358: INFO: Waiting up to 5m0s for pod "pod-configmaps-d69703f0-bf00-4a96-a97e-f79eeeedf6ad" in namespace "configmap-7889" to be "success or failure" Mar 9 09:36:56.407: INFO: Pod "pod-configmaps-d69703f0-bf00-4a96-a97e-f79eeeedf6ad": Phase="Pending", Reason="", readiness=false. Elapsed: 48.744661ms Mar 9 09:36:58.410: INFO: Pod "pod-configmaps-d69703f0-bf00-4a96-a97e-f79eeeedf6ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.051489368s STEP: Saw pod success Mar 9 09:36:58.410: INFO: Pod "pod-configmaps-d69703f0-bf00-4a96-a97e-f79eeeedf6ad" satisfied condition "success or failure" Mar 9 09:36:58.412: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d69703f0-bf00-4a96-a97e-f79eeeedf6ad container env-test: STEP: delete the pod Mar 9 09:36:58.468: INFO: Waiting for pod pod-configmaps-d69703f0-bf00-4a96-a97e-f79eeeedf6ad to disappear Mar 9 09:36:58.481: INFO: Pod pod-configmaps-d69703f0-bf00-4a96-a97e-f79eeeedf6ad no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:36:58.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7889" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:36:58.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 9 09:36:59.368: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 9 09:37:01.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719343419, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719343419, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719343419, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719343419, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:37:04.438: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:37:04.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:37:05.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3714" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.209 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":224,"skipped":3695,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:37:05.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 9 09:37:05.781: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 9 09:37:05.797: INFO: Waiting for terminating namespaces to be deleted... Mar 9 09:37:05.799: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 9 09:37:05.818: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:37:05.818: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 09:37:05.818: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:37:05.818: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 09:37:05.818: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 9 09:37:05.822: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:37:05.822: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 09:37:05.822: INFO: sample-crd-conversion-webhook-deployment-78dcf5dd84-q4lcn from crd-webhook-3714 started at 2020-03-09 09:36:59 +0000 UTC (1 container statuses recorded) Mar 9 09:37:05.822: INFO: Container sample-crd-conversion-webhook ready: true, restart count 0 Mar 9 09:37:05.822: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:37:05.822: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-dd0a0f85-1a4f-4bbe-a412-e447bfa15ca3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-dd0a0f85-1a4f-4bbe-a412-e447bfa15ca3 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-dd0a0f85-1a4f-4bbe-a412-e447bfa15ca3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:37:10.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3250" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":225,"skipped":3706,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:37:10.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 9 09:37:13.205: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:37:14.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9139" for this suite. 
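Note: the adopt/release behavior above is driven purely by selector matching against pod labels and ownerReferences. A minimal sketch with illustrative names (any long-running image works):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: httpd
    image: docker.io/library/httpd:2.4.38-alpine
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF

# The pre-existing pod matches the selector, so the ReplicaSet adopts it
# rather than creating a second replica:
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}'

# Overwriting the matched label releases the pod again, and the ReplicaSet
# creates a replacement to get back to replicas: 1:
kubectl label pod pod-adoption-release name=released --overwrite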
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":226,"skipped":3723,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:37:14.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 9 09:37:14.303: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-a 72f4b0b5-f879-42c2-bcc3-b5071ce963f8 276854 0 2020-03-09 09:37:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 9 09:37:14.303: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-a 72f4b0b5-f879-42c2-bcc3-b5071ce963f8 276854 0 2020-03-09 09:37:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 9 09:37:24.318: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-a 72f4b0b5-f879-42c2-bcc3-b5071ce963f8 276917 0 2020-03-09 09:37:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 9 09:37:24.318: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-a 72f4b0b5-f879-42c2-bcc3-b5071ce963f8 276917 0 2020-03-09 09:37:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 9 09:37:34.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-a 72f4b0b5-f879-42c2-bcc3-b5071ce963f8 276953 0 2020-03-09 09:37:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 9 09:37:34.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-a 72f4b0b5-f879-42c2-bcc3-b5071ce963f8 276953 0 2020-03-09 09:37:14 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 9 09:37:44.356: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-a 72f4b0b5-f879-42c2-bcc3-b5071ce963f8 276985 0 2020-03-09 09:37:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 9 09:37:44.356: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-a 72f4b0b5-f879-42c2-bcc3-b5071ce963f8 276985 0 2020-03-09 09:37:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 9 09:37:54.362: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-b d4ffd68e-8040-4090-92b3-921ca34d6428 277016 0 2020-03-09 09:37:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 9 09:37:54.362: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-b d4ffd68e-8040-4090-92b3-921ca34d6428 277016 0 2020-03-09 09:37:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 9 09:38:04.367: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-b d4ffd68e-8040-4090-92b3-921ca34d6428 277041 0 2020-03-09 09:37:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 9 09:38:04.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2882 /api/v1/namespaces/watch-2882/configmaps/e2e-watch-test-configmap-b d4ffd68e-8040-4090-92b3-921ca34d6428 277041 0 2020-03-09 09:37:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:38:14.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2882" for this suite. 
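Note: the notification sequence above can be reproduced with a label-selected watch. Names and label values below mirror the test but are otherwise arbitrary.

# Terminal 1: watch only configmaps carrying label A
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch

# Terminal 2: drive the ADDED, MODIFIED and DELETED notifications
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
EOF
kubectl patch configmap e2e-watch-test-configmap-a -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-test-configmap-a

# A watcher selecting watch-this-configmap=multiple-watchers-B sees none of
# these events, which is the isolation the test asserts.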
• [SLOW TEST:60.137 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":227,"skipped":3731,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:38:14.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-d591b7e9-8f39-4eac-a77b-2900928529e9 STEP: Creating secret with name secret-projected-all-test-volume-57631b1e-ba6d-4c4c-8e79-aa57d21d184e STEP: Creating a pod to test Check all projections for projected volume plugin Mar 9 09:38:14.540: INFO: Waiting up to 5m0s for pod "projected-volume-998eed04-60cd-4a6c-8798-c4d57a99c100" in namespace "projected-3223" to be "success or failure" Mar 9 09:38:14.544: INFO: Pod "projected-volume-998eed04-60cd-4a6c-8798-c4d57a99c100": Phase="Pending", Reason="", readiness=false. Elapsed: 3.953655ms Mar 9 09:38:16.563: INFO: Pod "projected-volume-998eed04-60cd-4a6c-8798-c4d57a99c100": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023458752s STEP: Saw pod success Mar 9 09:38:16.563: INFO: Pod "projected-volume-998eed04-60cd-4a6c-8798-c4d57a99c100" satisfied condition "success or failure" Mar 9 09:38:16.565: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-998eed04-60cd-4a6c-8798-c4d57a99c100 container projected-all-volume-test: STEP: delete the pod Mar 9 09:38:16.596: INFO: Waiting for pod projected-volume-998eed04-60cd-4a6c-8798-c4d57a99c100 to disappear Mar 9 09:38:16.604: INFO: Pod projected-volume-998eed04-60cd-4a6c-8798-c4d57a99c100 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:38:16.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3223" for this suite. 
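Note: a projected volume merges several sources under one mount point; the test above checks the configMap, secret, and downwardAPI projections together. A minimal sketch with illustrative names:

kubectl create configmap example-config --from-literal=configmap-data=from-configmap
kubectl create secret generic example-secret --from-literal=secret-data=from-secret

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls /all-in-one && cat /all-in-one/*"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-in-one
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: example-config
      - secret:
          name: example-secret
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF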
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3738,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:38:16.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 9 09:38:16.645: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 9 09:38:16.684: INFO: Waiting for terminating namespaces to be deleted... Mar 9 09:38:16.686: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 9 09:38:16.690: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:38:16.690: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 09:38:16.690: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:38:16.690: INFO: Container kindnet-cni ready: true, restart count 0 Mar 9 09:38:16.690: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 9 09:38:16.693: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:38:16.693: INFO: Container kube-proxy ready: true, restart count 0 Mar 9 09:38:16.693: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 9 09:38:16.693: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-90353e9b-04cb-4f0c-80da-341ea2f76a2f 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-90353e9b-04cb-4f0c-80da-341ea2f76a2f off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-90353e9b-04cb-4f0c-80da-341ea2f76a2f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:38:26.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6424" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.282 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":229,"skipped":3756,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:38:26.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:38:29.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2514" for this suite. 
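Note: "wrapper volumes" refers to the hidden emptyDir the kubelet wraps around atomically written volume types such as secrets and configmaps; the test asserts that two of them in one pod do not collide. A minimal sketch with illustrative names:

kubectl create secret generic wrapped-secret --from-literal=data-1=value-1
kubectl create configmap wrapped-configmap --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.29
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapped-secret
  - name: configmap-volume
    configMap:
      name: wrapped-configmap
EOF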
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":230,"skipped":3759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:38:29.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 9 09:38:31.716: INFO: Successfully updated pod "labelsupdateecc1931c-b343-4309-9b91-d7e454435977" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:38:35.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2138" for this suite. • [SLOW TEST:6.680 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3815,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:38:35.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-b2c306e6-9421-4aa9-a1c0-7a8a54da0d47 STEP: Creating a pod to test consume secrets Mar 9 09:38:35.831: INFO: Waiting up to 5m0s for pod "pod-secrets-9f707c28-7260-49d0-8d1b-0b0282c2887f" in namespace "secrets-3710" to be "success or failure" Mar 9 09:38:35.849: INFO: Pod "pod-secrets-9f707c28-7260-49d0-8d1b-0b0282c2887f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.442721ms Mar 9 09:38:37.852: INFO: Pod "pod-secrets-9f707c28-7260-49d0-8d1b-0b0282c2887f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.021108442s STEP: Saw pod success Mar 9 09:38:37.853: INFO: Pod "pod-secrets-9f707c28-7260-49d0-8d1b-0b0282c2887f" satisfied condition "success or failure" Mar 9 09:38:37.855: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9f707c28-7260-49d0-8d1b-0b0282c2887f container secret-volume-test: STEP: delete the pod Mar 9 09:38:37.878: INFO: Waiting for pod pod-secrets-9f707c28-7260-49d0-8d1b-0b0282c2887f to disappear Mar 9 09:38:37.883: INFO: Pod pod-secrets-9f707c28-7260-49d0-8d1b-0b0282c2887f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:38:37.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3710" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3829,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:38:37.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-d56e7dcb-c6f3-4088-8a9d-40d8e6751cb7 in namespace container-probe-1535 Mar 9 09:38:39.995: INFO: Started pod busybox-d56e7dcb-c6f3-4088-8a9d-40d8e6751cb7 in namespace container-probe-1535 STEP: checking the pod's current state and verifying that restartCount is present Mar 9 09:38:39.998: INFO: Initial restart count of pod busybox-d56e7dcb-c6f3-4088-8a9d-40d8e6751cb7 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:42:40.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1535" for this suite. 
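Note: the long wait above is the point of the test: a healthy exec probe must never trigger a restart. A minimal sketch of such a pod (illustrative name; any image with /bin/sh works):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

# As long as /tmp/health exists, this should keep printing 0:
kubectl get pod busybox-liveness -o jsonpath='{.status.containerStatuses[0].restartCount}'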
• [SLOW TEST:242.718 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3833,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:42:40.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 9 09:42:40.692: INFO: namespace kubectl-5395 Mar 9 09:42:40.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5395' Mar 9 09:42:42.535: INFO: stderr: "" Mar 9 09:42:42.535: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 9 09:42:43.540: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:42:43.540: INFO: Found 0 / 1 Mar 9 09:42:44.539: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:42:44.539: INFO: Found 0 / 1 Mar 9 09:42:45.540: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:42:45.540: INFO: Found 1 / 1 Mar 9 09:42:45.540: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 9 09:42:45.544: INFO: Selector matched 1 pods for map[app:agnhost] Mar 9 09:42:45.544: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 9 09:42:45.544: INFO: wait on agnhost-master startup in kubectl-5395 Mar 9 09:42:45.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-xr2wl agnhost-master --namespace=kubectl-5395' Mar 9 09:42:45.702: INFO: stderr: "" Mar 9 09:42:45.702: INFO: stdout: "Paused\n" STEP: exposing RC Mar 9 09:42:45.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5395' Mar 9 09:42:45.827: INFO: stderr: "" Mar 9 09:42:45.827: INFO: stdout: "service/rm2 exposed\n" Mar 9 09:42:45.837: INFO: Service rm2 in namespace kubectl-5395 found. STEP: exposing service Mar 9 09:42:47.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5395' Mar 9 09:42:48.003: INFO: stderr: "" Mar 9 09:42:48.003: INFO: stdout: "service/rm3 exposed\n" Mar 9 09:42:48.062: INFO: Service rm3 in namespace kubectl-5395 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:42:50.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5395" for this suite. • [SLOW TEST:9.468 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":234,"skipped":3871,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:42:50.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 9 09:42:50.169: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix871252441/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:42:50.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6257" for this suite. 
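Note: the proxy invocation above can be exercised directly. The socket path is arbitrary, and curl 7.40 or newer is assumed for --unix-socket.

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &

# The host in the URL is ignored; the unix socket carries the connection:
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

kill %1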
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":235,"skipped":3905,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:42:50.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 9 09:42:50.295: INFO: >>> kubeConfig: /root/.kube/config Mar 9 09:42:52.104: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:43:00.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4423" for this suite. • [SLOW TEST:9.923 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":236,"skipped":3919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:43:00.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:43:00.215: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-4988b914-1e84-4f44-aab0-27ff4c57d03b" in namespace "security-context-test-8066" to be "success or failure" Mar 9 09:43:00.219: INFO: Pod "alpine-nnp-false-4988b914-1e84-4f44-aab0-27ff4c57d03b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.553266ms Mar 9 09:43:02.224: INFO: Pod "alpine-nnp-false-4988b914-1e84-4f44-aab0-27ff4c57d03b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008886883s Mar 9 09:43:02.224: INFO: Pod "alpine-nnp-false-4988b914-1e84-4f44-aab0-27ff4c57d03b" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:43:02.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8066" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3954,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:43:02.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:43:09.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6133" for this suite. • [SLOW TEST:7.100 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":238,"skipped":3957,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:43:09.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:43:09.399: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 9 09:43:09.412: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 9 09:43:14.449: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 9 09:43:14.450: INFO: Creating deployment "test-rolling-update-deployment" Mar 9 09:43:14.468: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 9 09:43:14.521: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 9 09:43:16.528: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 9 09:43:16.530: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 9 09:43:16.536: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2802 /apis/apps/v1/namespaces/deployment-2802/deployments/test-rolling-update-deployment 94ff06ae-3428-4633-b5eb-7b308a09bbf0 278324 1 2020-03-09 09:43:14 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e675a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-09 09:43:14 +0000 UTC,LastTransitionTime:2020-03-09 09:43:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-09 09:43:15 +0000 UTC,LastTransitionTime:2020-03-09 09:43:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 9 09:43:16.538: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2802 /apis/apps/v1/namespaces/deployment-2802/replicasets/test-rolling-update-deployment-67cf4f6444 68ad643c-e4b5-4dd9-9f2b-1779c1f20e92 278310 1 2020-03-09 09:43:14 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 94ff06ae-3428-4633-b5eb-7b308a09bbf0 0xc003e67bf7 0xc003e67bf8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e67cb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 9 09:43:16.538: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 9 09:43:16.538: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2802 /apis/apps/v1/namespaces/deployment-2802/replicasets/test-rolling-update-controller dd5a7df8-4580-48e7-97c3-23eb5b4fd392 278322 2 2020-03-09 09:43:09 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 94ff06ae-3428-4633-b5eb-7b308a09bbf0 0xc003e67ab7 0xc003e67ab8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003e67b78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 9 09:43:16.541: INFO: Pod "test-rolling-update-deployment-67cf4f6444-8pxnm" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-8pxnm test-rolling-update-deployment-67cf4f6444- deployment-2802 /api/v1/namespaces/deployment-2802/pods/test-rolling-update-deployment-67cf4f6444-8pxnm 83b0ffe2-1194-4494-b24c-aef14bf2fc38 278309 0 2020-03-09 09:43:14 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 68ad643c-e4b5-4dd9-9f2b-1779c1f20e92 0xc003dd03f7 0xc003dd03f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v228c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v228c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v228c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:43:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:43:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:43:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.124,StartTime:2020-03-09 09:43:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:43:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://30fdae566b846cfa7242a2a5af8b8a8ba2d0a559675e6d833edf1a54ace90437,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:43:16.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2802" for this suite. 
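Note: in the Deployment dump above, "25%!,(MISSING)" is a logging artifact of an unescaped % in the test's Go format string; the underlying values are maxUnavailable: 25% and maxSurge: 25%. A trimmed sketch of the strategy that drove this rollout, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF

kubectl rollout status deployment/rolling-update-demo

# Updating .spec.template (for example, the image) then triggers the
# old-pods-deleted / new-pods-created behavior the test asserts.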
• [SLOW TEST:7.207 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":239,"skipped":3987,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:43:16.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 9 09:43:17.153: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:43:20.223: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:43:20.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:43:21.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3719" for this suite. 
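Note: a "non homogeneous list" here means CRs stored at different versions; a list request always pins one version, so the conversion webhook rewrites every item to it. Assuming a two-version CRD like the one sketched after the earlier conversion test, the check looks roughly like:

kubectl apply -f - <<'EOF'
apiVersion: stable.example.com/v1
kind: TestCrd
metadata:
  name: cr-created-as-v1
---
apiVersion: stable.example.com/v2
kind: TestCrd
metadata:
  name: cr-created-as-v2
EOF

# Both lists return both objects, each converted to the requested version:
kubectl get testcrds.v1.stable.example.com
kubectl get testcrds.v2.stable.example.com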
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:5.059 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":240,"skipped":4004,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:43:21.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 9 09:43:24.240: INFO: Successfully updated pod "pod-update-8e830566-1d73-469e-bfb1-30fc85b49c1b" STEP: verifying the updated pod is in kubernetes Mar 9 09:43:24.263: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:43:24.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1856" for this suite. 
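Note: "updated" here is an in-place mutation of a live pod object (its labels, in this run). A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-demo
  labels:
    time: "1"
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF

# Patch a label and read it back through the API:
kubectl patch pod pod-update-demo -p '{"metadata":{"labels":{"time":"2"}}}'
kubectl get pod pod-update-demo -o jsonpath='{.metadata.labels.time}'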
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:43:24.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8648 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8648 I0309 09:43:24.500747 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8648, replica count: 2 I0309 09:43:27.551134 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 9 09:43:27.551: INFO: Creating new exec pod Mar 9 09:43:30.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8648 execpodjlwlx -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 9 09:43:30.824: INFO: stderr: "I0309 09:43:30.739694 3362 log.go:172] (0xc0001042c0) (0xc0006e8780) Create stream\nI0309 09:43:30.739751 3362 log.go:172] (0xc0001042c0) (0xc0006e8780) Stream added, broadcasting: 1\nI0309 09:43:30.743311 3362 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0309 09:43:30.743360 3362 log.go:172] (0xc0001042c0) (0xc000551540) Create stream\nI0309 09:43:30.743371 3362 log.go:172] (0xc0001042c0) (0xc000551540) Stream added, broadcasting: 3\nI0309 09:43:30.744948 3362 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0309 09:43:30.745004 3362 log.go:172] (0xc0001042c0) (0xc0005515e0) Create stream\nI0309 09:43:30.745023 3362 log.go:172] (0xc0001042c0) (0xc0005515e0) Stream added, broadcasting: 5\nI0309 09:43:30.746697 3362 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0309 09:43:30.817143 3362 log.go:172] (0xc0001042c0) Data frame received for 5\nI0309 09:43:30.817173 3362 log.go:172] (0xc0005515e0) (5) Data frame handling\nI0309 09:43:30.817192 3362 log.go:172] (0xc0005515e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0309 09:43:30.818329 3362 log.go:172] (0xc0001042c0) Data frame received for 5\nI0309 09:43:30.818342 3362 log.go:172] (0xc0005515e0) (5) Data frame handling\nI0309 09:43:30.818353 3362 log.go:172] (0xc0005515e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0309 09:43:30.818942 3362 log.go:172] (0xc0001042c0) Data frame received for 5\nI0309 09:43:30.818968 3362 log.go:172] (0xc0005515e0) (5) Data frame handling\nI0309 09:43:30.819205 3362 log.go:172] (0xc0001042c0) Data frame received for 3\nI0309 
09:43:30.819223 3362 log.go:172] (0xc000551540) (3) Data frame handling\nI0309 09:43:30.820653 3362 log.go:172] (0xc0001042c0) Data frame received for 1\nI0309 09:43:30.820676 3362 log.go:172] (0xc0006e8780) (1) Data frame handling\nI0309 09:43:30.820690 3362 log.go:172] (0xc0006e8780) (1) Data frame sent\nI0309 09:43:30.820757 3362 log.go:172] (0xc0001042c0) (0xc0006e8780) Stream removed, broadcasting: 1\nI0309 09:43:30.820805 3362 log.go:172] (0xc0001042c0) Go away received\nI0309 09:43:30.821032 3362 log.go:172] (0xc0001042c0) (0xc0006e8780) Stream removed, broadcasting: 1\nI0309 09:43:30.821047 3362 log.go:172] (0xc0001042c0) (0xc000551540) Stream removed, broadcasting: 3\nI0309 09:43:30.821058 3362 log.go:172] (0xc0001042c0) (0xc0005515e0) Stream removed, broadcasting: 5\n" Mar 9 09:43:30.824: INFO: stdout: "" Mar 9 09:43:30.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8648 execpodjlwlx -- /bin/sh -x -c nc -zv -t -w 2 10.103.164.128 80' Mar 9 09:43:31.145: INFO: stderr: "I0309 09:43:31.079031 3384 log.go:172] (0xc00096c000) (0xc0007da000) Create stream\nI0309 09:43:31.079082 3384 log.go:172] (0xc00096c000) (0xc0007da000) Stream added, broadcasting: 1\nI0309 09:43:31.081806 3384 log.go:172] (0xc00096c000) Reply frame received for 1\nI0309 09:43:31.081840 3384 log.go:172] (0xc00096c000) (0xc000994000) Create stream\nI0309 09:43:31.081851 3384 log.go:172] (0xc00096c000) (0xc000994000) Stream added, broadcasting: 3\nI0309 09:43:31.082757 3384 log.go:172] (0xc00096c000) Reply frame received for 3\nI0309 09:43:31.082787 3384 log.go:172] (0xc00096c000) (0xc000683a40) Create stream\nI0309 09:43:31.082797 3384 log.go:172] (0xc00096c000) (0xc000683a40) Stream added, broadcasting: 5\nI0309 09:43:31.083580 3384 log.go:172] (0xc00096c000) Reply frame received for 5\nI0309 09:43:31.140830 3384 log.go:172] (0xc00096c000) Data frame received for 5\nI0309 09:43:31.140854 3384 log.go:172] (0xc000683a40) (5) Data frame handling\nI0309 09:43:31.140864 3384 log.go:172] (0xc000683a40) (5) Data frame sent\n+ nc -zv -t -w 2 10.103.164.128 80\nConnection to 10.103.164.128 80 port [tcp/http] succeeded!\nI0309 09:43:31.140882 3384 log.go:172] (0xc00096c000) Data frame received for 3\nI0309 09:43:31.140906 3384 log.go:172] (0xc000994000) (3) Data frame handling\nI0309 09:43:31.140931 3384 log.go:172] (0xc00096c000) Data frame received for 5\nI0309 09:43:31.140942 3384 log.go:172] (0xc000683a40) (5) Data frame handling\nI0309 09:43:31.142037 3384 log.go:172] (0xc00096c000) Data frame received for 1\nI0309 09:43:31.142061 3384 log.go:172] (0xc0007da000) (1) Data frame handling\nI0309 09:43:31.142074 3384 log.go:172] (0xc0007da000) (1) Data frame sent\nI0309 09:43:31.142091 3384 log.go:172] (0xc00096c000) (0xc0007da000) Stream removed, broadcasting: 1\nI0309 09:43:31.142104 3384 log.go:172] (0xc00096c000) Go away received\nI0309 09:43:31.142627 3384 log.go:172] (0xc00096c000) (0xc0007da000) Stream removed, broadcasting: 1\nI0309 09:43:31.142642 3384 log.go:172] (0xc00096c000) (0xc000994000) Stream removed, broadcasting: 3\nI0309 09:43:31.142650 3384 log.go:172] (0xc00096c000) (0xc000683a40) Stream removed, broadcasting: 5\n" Mar 9 09:43:31.145: INFO: stdout: "" Mar 9 09:43:31.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8648 execpodjlwlx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 30699' Mar 9 09:43:31.331: INFO: stderr: "I0309 09:43:31.274200 3404 log.go:172] (0xc0000f4580) (0xc00042f540) Create 
stream\nI0309 09:43:31.274242 3404 log.go:172] (0xc0000f4580) (0xc00042f540) Stream added, broadcasting: 1\nI0309 09:43:31.276064 3404 log.go:172] (0xc0000f4580) Reply frame received for 1\nI0309 09:43:31.276098 3404 log.go:172] (0xc0000f4580) (0xc0006fdae0) Create stream\nI0309 09:43:31.276110 3404 log.go:172] (0xc0000f4580) (0xc0006fdae0) Stream added, broadcasting: 3\nI0309 09:43:31.276887 3404 log.go:172] (0xc0000f4580) Reply frame received for 3\nI0309 09:43:31.276902 3404 log.go:172] (0xc0000f4580) (0xc000962000) Create stream\nI0309 09:43:31.276907 3404 log.go:172] (0xc0000f4580) (0xc000962000) Stream added, broadcasting: 5\nI0309 09:43:31.277556 3404 log.go:172] (0xc0000f4580) Reply frame received for 5\nI0309 09:43:31.326945 3404 log.go:172] (0xc0000f4580) Data frame received for 5\nI0309 09:43:31.326959 3404 log.go:172] (0xc000962000) (5) Data frame handling\nI0309 09:43:31.326966 3404 log.go:172] (0xc000962000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.4 30699\nI0309 09:43:31.327476 3404 log.go:172] (0xc0000f4580) Data frame received for 5\nI0309 09:43:31.327505 3404 log.go:172] (0xc000962000) (5) Data frame handling\nI0309 09:43:31.327527 3404 log.go:172] (0xc000962000) (5) Data frame sent\nI0309 09:43:31.327543 3404 log.go:172] (0xc0000f4580) Data frame received for 5\nI0309 09:43:31.327554 3404 log.go:172] (0xc000962000) (5) Data frame handling\nConnection to 172.17.0.4 30699 port [tcp/30699] succeeded!\nI0309 09:43:31.327651 3404 log.go:172] (0xc0000f4580) Data frame received for 3\nI0309 09:43:31.327664 3404 log.go:172] (0xc0006fdae0) (3) Data frame handling\nI0309 09:43:31.328792 3404 log.go:172] (0xc0000f4580) Data frame received for 1\nI0309 09:43:31.328863 3404 log.go:172] (0xc00042f540) (1) Data frame handling\nI0309 09:43:31.328879 3404 log.go:172] (0xc00042f540) (1) Data frame sent\nI0309 09:43:31.328892 3404 log.go:172] (0xc0000f4580) (0xc00042f540) Stream removed, broadcasting: 1\nI0309 09:43:31.328938 3404 log.go:172] (0xc0000f4580) Go away received\nI0309 09:43:31.329128 3404 log.go:172] (0xc0000f4580) (0xc00042f540) Stream removed, broadcasting: 1\nI0309 09:43:31.329143 3404 log.go:172] (0xc0000f4580) (0xc0006fdae0) Stream removed, broadcasting: 3\nI0309 09:43:31.329149 3404 log.go:172] (0xc0000f4580) (0xc000962000) Stream removed, broadcasting: 5\n" Mar 9 09:43:31.331: INFO: stdout: "" Mar 9 09:43:31.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8648 execpodjlwlx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 30699' Mar 9 09:43:31.492: INFO: stderr: "I0309 09:43:31.424976 3423 log.go:172] (0xc000bc8fd0) (0xc000a2c640) Create stream\nI0309 09:43:31.425011 3423 log.go:172] (0xc000bc8fd0) (0xc000a2c640) Stream added, broadcasting: 1\nI0309 09:43:31.427541 3423 log.go:172] (0xc000bc8fd0) Reply frame received for 1\nI0309 09:43:31.427603 3423 log.go:172] (0xc000bc8fd0) (0xc0009fe140) Create stream\nI0309 09:43:31.427616 3423 log.go:172] (0xc000bc8fd0) (0xc0009fe140) Stream added, broadcasting: 3\nI0309 09:43:31.429484 3423 log.go:172] (0xc000bc8fd0) Reply frame received for 3\nI0309 09:43:31.429502 3423 log.go:172] (0xc000bc8fd0) (0xc0009fe000) Create stream\nI0309 09:43:31.429511 3423 log.go:172] (0xc000bc8fd0) (0xc0009fe000) Stream added, broadcasting: 5\nI0309 09:43:31.430069 3423 log.go:172] (0xc000bc8fd0) Reply frame received for 5\nI0309 09:43:31.488112 3423 log.go:172] (0xc000bc8fd0) Data frame received for 3\nI0309 09:43:31.488131 3423 log.go:172] (0xc0009fe140) (3) Data frame handling\nI0309 
09:43:31.488167 3423 log.go:172] (0xc000bc8fd0) Data frame received for 5\nI0309 09:43:31.488179 3423 log.go:172] (0xc0009fe000) (5) Data frame handling\nI0309 09:43:31.488193 3423 log.go:172] (0xc0009fe000) (5) Data frame sent\nI0309 09:43:31.488202 3423 log.go:172] (0xc000bc8fd0) Data frame received for 5\nI0309 09:43:31.488211 3423 log.go:172] (0xc0009fe000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.5 30699\nConnection to 172.17.0.5 30699 port [tcp/30699] succeeded!\nI0309 09:43:31.488871 3423 log.go:172] (0xc000bc8fd0) Data frame received for 1\nI0309 09:43:31.488890 3423 log.go:172] (0xc000a2c640) (1) Data frame handling\nI0309 09:43:31.488907 3423 log.go:172] (0xc000a2c640) (1) Data frame sent\nI0309 09:43:31.488955 3423 log.go:172] (0xc000bc8fd0) (0xc000a2c640) Stream removed, broadcasting: 1\nI0309 09:43:31.488971 3423 log.go:172] (0xc000bc8fd0) Go away received\nI0309 09:43:31.489302 3423 log.go:172] (0xc000bc8fd0) (0xc000a2c640) Stream removed, broadcasting: 1\nI0309 09:43:31.489313 3423 log.go:172] (0xc000bc8fd0) (0xc0009fe140) Stream removed, broadcasting: 3\nI0309 09:43:31.489318 3423 log.go:172] (0xc000bc8fd0) (0xc0009fe000) Stream removed, broadcasting: 5\n" Mar 9 09:43:31.492: INFO: stdout: "" Mar 9 09:43:31.492: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:43:31.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8648" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.272 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":242,"skipped":4059,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:43:31.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4999 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4999 I0309 09:43:31.715788 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4999, replica count: 2 I0309 
09:43:34.766256 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 9 09:43:34.766: INFO: Creating new exec pod Mar 9 09:43:39.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4999 execpodklcr5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 9 09:43:40.015: INFO: stderr: "I0309 09:43:39.941592 3446 log.go:172] (0xc000a03290) (0xc00096a500) Create stream\nI0309 09:43:39.941655 3446 log.go:172] (0xc000a03290) (0xc00096a500) Stream added, broadcasting: 1\nI0309 09:43:39.945745 3446 log.go:172] (0xc000a03290) Reply frame received for 1\nI0309 09:43:39.945788 3446 log.go:172] (0xc000a03290) (0xc0006e1cc0) Create stream\nI0309 09:43:39.945796 3446 log.go:172] (0xc000a03290) (0xc0006e1cc0) Stream added, broadcasting: 3\nI0309 09:43:39.946695 3446 log.go:172] (0xc000a03290) Reply frame received for 3\nI0309 09:43:39.946724 3446 log.go:172] (0xc000a03290) (0xc0006368c0) Create stream\nI0309 09:43:39.946733 3446 log.go:172] (0xc000a03290) (0xc0006368c0) Stream added, broadcasting: 5\nI0309 09:43:39.947534 3446 log.go:172] (0xc000a03290) Reply frame received for 5\nI0309 09:43:40.008978 3446 log.go:172] (0xc000a03290) Data frame received for 5\nI0309 09:43:40.009004 3446 log.go:172] (0xc0006368c0) (5) Data frame handling\nI0309 09:43:40.009017 3446 log.go:172] (0xc0006368c0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0309 09:43:40.010072 3446 log.go:172] (0xc000a03290) Data frame received for 5\nI0309 09:43:40.010092 3446 log.go:172] (0xc0006368c0) (5) Data frame handling\nI0309 09:43:40.010108 3446 log.go:172] (0xc0006368c0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0309 09:43:40.010732 3446 log.go:172] (0xc000a03290) Data frame received for 3\nI0309 09:43:40.010768 3446 log.go:172] (0xc0006e1cc0) (3) Data frame handling\nI0309 09:43:40.010796 3446 log.go:172] (0xc000a03290) Data frame received for 5\nI0309 09:43:40.010814 3446 log.go:172] (0xc0006368c0) (5) Data frame handling\nI0309 09:43:40.012091 3446 log.go:172] (0xc000a03290) Data frame received for 1\nI0309 09:43:40.012117 3446 log.go:172] (0xc00096a500) (1) Data frame handling\nI0309 09:43:40.012139 3446 log.go:172] (0xc00096a500) (1) Data frame sent\nI0309 09:43:40.012157 3446 log.go:172] (0xc000a03290) (0xc00096a500) Stream removed, broadcasting: 1\nI0309 09:43:40.012178 3446 log.go:172] (0xc000a03290) Go away received\nI0309 09:43:40.012548 3446 log.go:172] (0xc000a03290) (0xc00096a500) Stream removed, broadcasting: 1\nI0309 09:43:40.012569 3446 log.go:172] (0xc000a03290) (0xc0006e1cc0) Stream removed, broadcasting: 3\nI0309 09:43:40.012579 3446 log.go:172] (0xc000a03290) (0xc0006368c0) Stream removed, broadcasting: 5\n" Mar 9 09:43:40.016: INFO: stdout: "" Mar 9 09:43:40.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4999 execpodklcr5 -- /bin/sh -x -c nc -zv -t -w 2 10.100.134.19 80' Mar 9 09:43:40.201: INFO: stderr: "I0309 09:43:40.127623 3466 log.go:172] (0xc0008d7130) (0xc00094c6e0) Create stream\nI0309 09:43:40.127661 3466 log.go:172] (0xc0008d7130) (0xc00094c6e0) Stream added, broadcasting: 1\nI0309 09:43:40.132106 3466 log.go:172] (0xc0008d7130) Reply frame received for 1\nI0309 09:43:40.132133 3466 log.go:172] (0xc0008d7130) (0xc000214640) Create stream\nI0309 09:43:40.132140 3466 log.go:172] (0xc0008d7130) (0xc000214640) Stream added, 
broadcasting: 3\nI0309 09:43:40.134423 3466 log.go:172] (0xc0008d7130) Reply frame received for 3\nI0309 09:43:40.134442 3466 log.go:172] (0xc0008d7130) (0xc0005a2be0) Create stream\nI0309 09:43:40.134449 3466 log.go:172] (0xc0008d7130) (0xc0005a2be0) Stream added, broadcasting: 5\nI0309 09:43:40.135310 3466 log.go:172] (0xc0008d7130) Reply frame received for 5\nI0309 09:43:40.196075 3466 log.go:172] (0xc0008d7130) Data frame received for 5\nI0309 09:43:40.196097 3466 log.go:172] (0xc0005a2be0) (5) Data frame handling\nI0309 09:43:40.196107 3466 log.go:172] (0xc0005a2be0) (5) Data frame sent\nI0309 09:43:40.196114 3466 log.go:172] (0xc0008d7130) Data frame received for 5\nI0309 09:43:40.196120 3466 log.go:172] (0xc0005a2be0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.134.19 80\nConnection to 10.100.134.19 80 port [tcp/http] succeeded!\nI0309 09:43:40.196150 3466 log.go:172] (0xc0008d7130) Data frame received for 3\nI0309 09:43:40.196162 3466 log.go:172] (0xc000214640) (3) Data frame handling\nI0309 09:43:40.197819 3466 log.go:172] (0xc0008d7130) Data frame received for 1\nI0309 09:43:40.197843 3466 log.go:172] (0xc00094c6e0) (1) Data frame handling\nI0309 09:43:40.197857 3466 log.go:172] (0xc00094c6e0) (1) Data frame sent\nI0309 09:43:40.197871 3466 log.go:172] (0xc0008d7130) (0xc00094c6e0) Stream removed, broadcasting: 1\nI0309 09:43:40.197893 3466 log.go:172] (0xc0008d7130) Go away received\nI0309 09:43:40.198272 3466 log.go:172] (0xc0008d7130) (0xc00094c6e0) Stream removed, broadcasting: 1\nI0309 09:43:40.198296 3466 log.go:172] (0xc0008d7130) (0xc000214640) Stream removed, broadcasting: 3\nI0309 09:43:40.198307 3466 log.go:172] (0xc0008d7130) (0xc0005a2be0) Stream removed, broadcasting: 5\n" Mar 9 09:43:40.201: INFO: stdout: "" Mar 9 09:43:40.201: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:43:40.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4999" for this suite. 
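Both Services tests above exercise the same transition: an ExternalName record is converted into a real proxied service, after which the nc probes against the service name and its ClusterIP (and, in the NodePort case, each node IP and node port) must all succeed. A rough manual sketch of the ClusterIP variant; all names are illustrative, and the suite performs the equivalent change as a single API update, so the one-shot patch below is an assumption about doing the same thing with kubectl:

kubectl create deployment extname-backend --image=nginx   # backend pods labeled app=extname-backend
kubectl create service externalname extname-demo --external-name=example.com
kubectl patch service extname-demo --type=merge \
  -p '{"spec":{"type":"ClusterIP","externalName":null,"selector":{"app":"extname-backend"},"ports":[{"port":80}]}}'
kubectl run probe --image=busybox --restart=Never -- wget -q -T 2 -O- http://extname-demo
kubectl logs probe   # nginx's welcome page proves the converted service now proxies to the backend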
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:8.748 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":243,"skipped":4066,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:43:40.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 9 09:43:40.375: INFO: Waiting up to 5m0s for pod "pod-e36c1317-987a-4cb5-93ae-526934a8420c" in namespace "emptydir-3807" to be "success or failure" Mar 9 09:43:40.389: INFO: Pod "pod-e36c1317-987a-4cb5-93ae-526934a8420c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.998787ms Mar 9 09:43:42.392: INFO: Pod "pod-e36c1317-987a-4cb5-93ae-526934a8420c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01681157s STEP: Saw pod success Mar 9 09:43:42.392: INFO: Pod "pod-e36c1317-987a-4cb5-93ae-526934a8420c" satisfied condition "success or failure" Mar 9 09:43:42.394: INFO: Trying to get logs from node jerma-worker2 pod pod-e36c1317-987a-4cb5-93ae-526934a8420c container test-container: STEP: delete the pod Mar 9 09:43:42.446: INFO: Waiting for pod pod-e36c1317-987a-4cb5-93ae-526934a8420c to disappear Mar 9 09:43:42.449: INFO: Pod pod-e36c1317-987a-4cb5-93ae-526934a8420c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:43:42.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3807" for this suite. 
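The emptydir test above boils down to three ingredients: a tmpfs-backed emptyDir, a non-root user, and a file created with mode 0644 that must read back with exactly those bits. A minimal sketch (the pod name and UID are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # non-root, per the test name
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /mnt/f && chmod 0644 /mnt/f && stat -c '%a' /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs-backed, per "(tmpfs)" in the test name
EOF
kubectl logs emptydir-0644-demo    # expect: 644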
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4082,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:43:42.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:43:42.557: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 9 09:43:42.577: INFO: Number of nodes with available pods: 0 Mar 9 09:43:42.577: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Mar 9 09:43:42.625: INFO: Number of nodes with available pods: 0 Mar 9 09:43:42.625: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:43.629: INFO: Number of nodes with available pods: 0 Mar 9 09:43:43.629: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:44.629: INFO: Number of nodes with available pods: 1 Mar 9 09:43:44.629: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 9 09:43:44.660: INFO: Number of nodes with available pods: 1 Mar 9 09:43:44.660: INFO: Number of running nodes: 0, number of available pods: 1 Mar 9 09:43:45.668: INFO: Number of nodes with available pods: 0 Mar 9 09:43:45.668: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 9 09:43:45.715: INFO: Number of nodes with available pods: 0 Mar 9 09:43:45.715: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:46.720: INFO: Number of nodes with available pods: 0 Mar 9 09:43:46.720: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:47.719: INFO: Number of nodes with available pods: 0 Mar 9 09:43:47.719: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:48.721: INFO: Number of nodes with available pods: 0 Mar 9 09:43:48.721: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:49.720: INFO: Number of nodes with available pods: 0 Mar 9 09:43:49.720: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:50.722: INFO: Number of nodes with available pods: 0 Mar 9 09:43:50.722: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:51.720: INFO: Number of nodes with available pods: 0 Mar 9 09:43:51.720: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:52.721: INFO: Number of nodes with available pods: 0 Mar 9 09:43:52.721: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:53.719: INFO: Number of nodes with available pods: 0 Mar 9 
09:43:53.719: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:54.718: INFO: Number of nodes with available pods: 0 Mar 9 09:43:54.718: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:55.720: INFO: Number of nodes with available pods: 0 Mar 9 09:43:55.720: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:56.720: INFO: Number of nodes with available pods: 0 Mar 9 09:43:56.720: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:57.719: INFO: Number of nodes with available pods: 0 Mar 9 09:43:57.719: INFO: Node jerma-worker is running more than one daemon pod Mar 9 09:43:58.739: INFO: Number of nodes with available pods: 1 Mar 9 09:43:58.739: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3457, will wait for the garbage collector to delete the pods Mar 9 09:43:58.803: INFO: Deleting DaemonSet.extensions daemon-set took: 5.762952ms Mar 9 09:43:58.903: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.250803ms Mar 9 09:44:06.106: INFO: Number of nodes with available pods: 0 Mar 9 09:44:06.106: INFO: Number of running nodes: 0, number of available pods: 0 Mar 9 09:44:06.108: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3457/daemonsets","resourceVersion":"278809"},"items":null} Mar 9 09:44:06.122: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3457/pods","resourceVersion":"278809"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:06.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3457" for this suite. 
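The "complex daemon" sequence above is: create a DaemonSet with a node selector that matches no node (0 pods), label a node blue (1 pod), relabel it green (the pod drains), then retarget the selector at green under a RollingUpdate strategy (1 pod again). A sketch of the moving parts, with illustrative names:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-daemon
spec:
  selector:
    matchLabels:
      app: demo-daemon
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: demo-daemon
    spec:
      nodeSelector:
        color: blue                # only nodes labeled color=blue run the daemon
      containers:
      - name: app
        image: nginx
EOF
kubectl label node <node-name> color=blue               # daemon pod appears on the node
kubectl label node <node-name> color=green --overwrite  # daemon pod is removed again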
• [SLOW TEST:23.702 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":245,"skipped":4091,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:06.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:06.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2235" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":246,"skipped":4112,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:06.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-72215a42-cfb7-4947-b161-6ced1ba2819e STEP: Creating a pod to test consume configMaps Mar 9 09:44:06.380: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e0f0010a-e9d5-4fc6-9078-25bff8b0ab97" in namespace "projected-9934" to be "success or failure" Mar 9 09:44:06.384: INFO: Pod "pod-projected-configmaps-e0f0010a-e9d5-4fc6-9078-25bff8b0ab97": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.029312ms Mar 9 09:44:08.388: INFO: Pod "pod-projected-configmaps-e0f0010a-e9d5-4fc6-9078-25bff8b0ab97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007523847s STEP: Saw pod success Mar 9 09:44:08.388: INFO: Pod "pod-projected-configmaps-e0f0010a-e9d5-4fc6-9078-25bff8b0ab97" satisfied condition "success or failure" Mar 9 09:44:08.390: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e0f0010a-e9d5-4fc6-9078-25bff8b0ab97 container projected-configmap-volume-test: STEP: delete the pod Mar 9 09:44:08.448: INFO: Waiting for pod pod-projected-configmaps-e0f0010a-e9d5-4fc6-9078-25bff8b0ab97 to disappear Mar 9 09:44:08.457: INFO: Pod pod-projected-configmaps-e0f0010a-e9d5-4fc6-9078-25bff8b0ab97 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:08.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9934" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4128,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:08.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:08.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7604" for this suite. 
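The kubelet test above needs only a pod whose container always exits non-zero; the assertion is that such a pod can still be deleted cleanly. A sketch (the name is illustrative, and the suite's own pod may use a different restart policy):

kubectl run bin-false-demo --image=busybox --restart=Never -- /bin/false
kubectl get pod bin-false-demo     # status Error (or CrashLoopBackOff with a restarting policy)
kubectl delete pod bin-false-demo  # must succeed despite the container never running successfully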
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4139,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:08.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:13.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9423" for this suite. • [SLOW TEST:5.213 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":249,"skipped":4152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:13.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:44:13.993: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df81cb71-d95a-49c5-89e6-bc423f06d57e" in namespace "downward-api-6331" to be "success or failure" Mar 9 09:44:13.997: INFO: Pod "downwardapi-volume-df81cb71-d95a-49c5-89e6-bc423f06d57e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.996962ms Mar 9 09:44:16.004: INFO: Pod "downwardapi-volume-df81cb71-d95a-49c5-89e6-bc423f06d57e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011052424s STEP: Saw pod success Mar 9 09:44:16.004: INFO: Pod "downwardapi-volume-df81cb71-d95a-49c5-89e6-bc423f06d57e" satisfied condition "success or failure" Mar 9 09:44:16.006: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-df81cb71-d95a-49c5-89e6-bc423f06d57e container client-container: STEP: delete the pod Mar 9 09:44:16.046: INFO: Waiting for pod downwardapi-volume-df81cb71-d95a-49c5-89e6-bc423f06d57e to disappear Mar 9 09:44:16.051: INFO: Pod downwardapi-volume-df81cb71-d95a-49c5-89e6-bc423f06d57e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:16.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6331" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4192,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:16.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-2wff STEP: Creating a pod to test atomic-volume-subpath Mar 9 09:44:16.154: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2wff" in namespace "subpath-7718" to be "success or failure" Mar 9 09:44:16.158: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.5076ms Mar 9 09:44:18.162: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Running", Reason="", readiness=true. Elapsed: 2.008317821s Mar 9 09:44:20.166: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Running", Reason="", readiness=true. Elapsed: 4.012375453s Mar 9 09:44:22.169: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Running", Reason="", readiness=true. Elapsed: 6.015530348s Mar 9 09:44:24.174: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Running", Reason="", readiness=true. Elapsed: 8.020180476s Mar 9 09:44:26.178: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Running", Reason="", readiness=true. Elapsed: 10.024104559s Mar 9 09:44:28.183: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Running", Reason="", readiness=true. Elapsed: 12.029391157s Mar 9 09:44:30.187: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Running", Reason="", readiness=true. Elapsed: 14.033422842s Mar 9 09:44:32.192: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.0378744s Mar 9 09:44:34.196: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Running", Reason="", readiness=true. Elapsed: 18.042311725s Mar 9 09:44:36.200: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Running", Reason="", readiness=true. Elapsed: 20.046742236s Mar 9 09:44:38.204: INFO: Pod "pod-subpath-test-configmap-2wff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.050233307s STEP: Saw pod success Mar 9 09:44:38.204: INFO: Pod "pod-subpath-test-configmap-2wff" satisfied condition "success or failure" Mar 9 09:44:38.206: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-2wff container test-container-subpath-configmap-2wff: STEP: delete the pod Mar 9 09:44:38.263: INFO: Waiting for pod pod-subpath-test-configmap-2wff to disappear Mar 9 09:44:38.266: INFO: Pod pod-subpath-test-configmap-2wff no longer exists STEP: Deleting pod pod-subpath-test-configmap-2wff Mar 9 09:44:38.266: INFO: Deleting pod "pod-subpath-test-configmap-2wff" in namespace "subpath-7718" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:38.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7718" for this suite. • [SLOW TEST:22.215 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":251,"skipped":4196,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:38.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 9 09:44:38.332: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Mar 9 09:44:39.468: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 9 09:44:41.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719343879, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719343879, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719343879, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719343879, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 9 09:44:44.212: INFO: Waited 617.917799ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:44.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5783" for this suite. • [SLOW TEST:6.475 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":252,"skipped":4227,"failed":0} [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:44.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:44:44.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12ee4080-0a49-49e3-b19a-45fcad82d02e" in namespace "downward-api-9693" to be "success or failure" Mar 9 09:44:44.854: INFO: Pod "downwardapi-volume-12ee4080-0a49-49e3-b19a-45fcad82d02e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.559118ms Mar 9 09:44:46.857: INFO: Pod "downwardapi-volume-12ee4080-0a49-49e3-b19a-45fcad82d02e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008183319s Mar 9 09:44:48.861: INFO: Pod "downwardapi-volume-12ee4080-0a49-49e3-b19a-45fcad82d02e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011909724s STEP: Saw pod success Mar 9 09:44:48.861: INFO: Pod "downwardapi-volume-12ee4080-0a49-49e3-b19a-45fcad82d02e" satisfied condition "success or failure" Mar 9 09:44:48.865: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-12ee4080-0a49-49e3-b19a-45fcad82d02e container client-container: STEP: delete the pod Mar 9 09:44:48.901: INFO: Waiting for pod downwardapi-volume-12ee4080-0a49-49e3-b19a-45fcad82d02e to disappear Mar 9 09:44:48.908: INFO: Pod downwardapi-volume-12ee4080-0a49-49e3-b19a-45fcad82d02e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:48.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9693" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4227,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:48.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 9 09:44:51.528: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7305 pod-service-account-612b8903-3465-4143-bd7c-d9a80c96432b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 9 09:44:51.737: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7305 pod-service-account-612b8903-3465-4143-bd7c-d9a80c96432b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 9 09:44:51.938: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7305 pod-service-account-612b8903-3465-4143-bd7c-d9a80c96432b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:52.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7305" for this suite. 
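The three exec commands above read the standard in-pod projection of the service-account credential; the same three files are present in any pod that mounts its token (pod name illustrative):

kubectl exec <pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount
# ca.crt  namespace  token
kubectl exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace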
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":254,"skipped":4249,"failed":0} ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:52.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 9 09:44:52.361: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9129 /api/v1/namespaces/watch-9129/configmaps/e2e-watch-test-resource-version b4369932-2efd-4db7-b68d-2dd83de12d0e 279214 0 2020-03-09 09:44:52 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 9 09:44:52.361: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9129 /api/v1/namespaces/watch-9129/configmaps/e2e-watch-test-resource-version b4369932-2efd-4db7-b68d-2dd83de12d0e 279215 0 2020-03-09 09:44:52 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:44:52.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9129" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":255,"skipped":4249,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:44:52.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-fd82b4ca-e9f1-4b3e-b55f-d8c2a1feabb1 Mar 9 09:44:52.482: INFO: Pod name my-hostname-basic-fd82b4ca-e9f1-4b3e-b55f-d8c2a1feabb1: Found 0 pods out of 1 Mar 9 09:44:57.488: INFO: Pod name my-hostname-basic-fd82b4ca-e9f1-4b3e-b55f-d8c2a1feabb1: Found 1 pods out of 1 Mar 9 09:44:57.488: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-fd82b4ca-e9f1-4b3e-b55f-d8c2a1feabb1" are running Mar 9 09:44:57.490: INFO: Pod "my-hostname-basic-fd82b4ca-e9f1-4b3e-b55f-d8c2a1feabb1-jst8z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 09:44:52 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 09:44:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 09:44:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-09 09:44:52 +0000 UTC Reason: Message:}]) Mar 9 09:44:57.490: INFO: Trying to dial the pod Mar 9 09:45:02.501: INFO: Controller my-hostname-basic-fd82b4ca-e9f1-4b3e-b55f-d8c2a1feabb1: Got expected result from replica 1 [my-hostname-basic-fd82b4ca-e9f1-4b3e-b55f-d8c2a1feabb1-jst8z]: "my-hostname-basic-fd82b4ca-e9f1-4b3e-b55f-d8c2a1feabb1-jst8z", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:45:02.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6928" for this suite. 
• [SLOW TEST:10.143 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":256,"skipped":4272,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:45:02.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:45:08.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3676" for this suite. STEP: Destroying namespace "nsdeletetest-5966" for this suite. Mar 9 09:45:08.825: INFO: Namespace nsdeletetest-5966 was already deleted STEP: Destroying namespace "nsdeletetest-4315" for this suite. 
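Namespace deletion above is verified to cascade to services: after delete-and-recreate, a namespace of the same name starts empty. The same check by hand (names illustrative):

kubectl create namespace ns-demo
kubectl -n ns-demo create service clusterip svc-demo --tcp=80:80
kubectl delete namespace ns-demo    # waits until the namespace's contents are gone
kubectl create namespace ns-demo    # same name, fresh contents
kubectl -n ns-demo get services     # empty: the old service was removed with the namespace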
• [SLOW TEST:6.318 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":257,"skipped":4286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:45:08.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:45:08.898: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:45:13.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2783" for this suite. 
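The websocket test drives the same pods/exec subresource that kubectl uses; only the transport differs. The URL shape it upgrades looks roughly like the comment below (pod and namespace illustrative, one command= query parameter per argv element):

# POST /api/v1/namespaces/<ns>/pods/<pod>/exec?command=echo&command=hi&stdout=true
kubectl exec <pod> -- echo hi   # same subresource, negotiated over SPDY instead of a websocket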
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4315,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:45:13.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 9 09:45:13.680: INFO: Pod name wrapped-volume-race-1bbf0d8d-9d3f-445f-aa8e-176ab7eb8879: Found 0 pods out of 5 Mar 9 09:45:18.685: INFO: Pod name wrapped-volume-race-1bbf0d8d-9d3f-445f-aa8e-176ab7eb8879: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1bbf0d8d-9d3f-445f-aa8e-176ab7eb8879 in namespace emptydir-wrapper-6889, will wait for the garbage collector to delete the pods Mar 9 09:45:28.791: INFO: Deleting ReplicationController wrapped-volume-race-1bbf0d8d-9d3f-445f-aa8e-176ab7eb8879 took: 14.614127ms Mar 9 09:45:29.191: INFO: Terminating ReplicationController wrapped-volume-race-1bbf0d8d-9d3f-445f-aa8e-176ab7eb8879 pods took: 400.288175ms STEP: Creating RC which spawns configmap-volume pods Mar 9 09:45:35.238: INFO: Pod name wrapped-volume-race-f6443729-8d4d-4e53-bace-7e4979395df3: Found 0 pods out of 5 Mar 9 09:45:40.242: INFO: Pod name wrapped-volume-race-f6443729-8d4d-4e53-bace-7e4979395df3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f6443729-8d4d-4e53-bace-7e4979395df3 in namespace emptydir-wrapper-6889, will wait for the garbage collector to delete the pods Mar 9 09:45:52.348: INFO: Deleting ReplicationController wrapped-volume-race-f6443729-8d4d-4e53-bace-7e4979395df3 took: 6.784103ms Mar 9 09:45:52.748: INFO: Terminating ReplicationController wrapped-volume-race-f6443729-8d4d-4e53-bace-7e4979395df3 pods took: 400.352794ms STEP: Creating RC which spawns configmap-volume pods Mar 9 09:45:58.384: INFO: Pod name wrapped-volume-race-415ba18e-2cb1-4a3a-a85b-f804c4f7334a: Found 0 pods out of 5 Mar 9 09:46:03.390: INFO: Pod name wrapped-volume-race-415ba18e-2cb1-4a3a-a85b-f804c4f7334a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-415ba18e-2cb1-4a3a-a85b-f804c4f7334a in namespace emptydir-wrapper-6889, will wait for the garbage collector to delete the pods Mar 9 09:46:15.552: INFO: Deleting ReplicationController wrapped-volume-race-415ba18e-2cb1-4a3a-a85b-f804c4f7334a took: 6.063984ms Mar 9 09:46:15.952: INFO: Terminating ReplicationController wrapped-volume-race-415ba18e-2cb1-4a3a-a85b-f804c4f7334a pods took: 400.259353ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 
09:46:21.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6889" for this suite. • [SLOW TEST:68.637 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":259,"skipped":4334,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:46:21.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 9 09:46:21.762: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1570 /api/v1/namespaces/watch-1570/configmaps/e2e-watch-test-label-changed e2914b95-8ffe-4083-aba6-caac9c1cd0d7 280221 0 2020-03-09 09:46:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 9 09:46:21.762: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1570 /api/v1/namespaces/watch-1570/configmaps/e2e-watch-test-label-changed e2914b95-8ffe-4083-aba6-caac9c1cd0d7 280222 0 2020-03-09 09:46:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 9 09:46:21.763: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1570 /api/v1/namespaces/watch-1570/configmaps/e2e-watch-test-label-changed e2914b95-8ffe-4083-aba6-caac9c1cd0d7 280223 0 2020-03-09 09:46:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 9 09:46:31.816: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1570 /api/v1/namespaces/watch-1570/configmaps/e2e-watch-test-label-changed e2914b95-8ffe-4083-aba6-caac9c1cd0d7 280450 0 2020-03-09 
09:46:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 9 09:46:31.816: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1570 /api/v1/namespaces/watch-1570/configmaps/e2e-watch-test-label-changed e2914b95-8ffe-4083-aba6-caac9c1cd0d7 280451 0 2020-03-09 09:46:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 9 09:46:31.816: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1570 /api/v1/namespaces/watch-1570/configmaps/e2e-watch-test-label-changed e2914b95-8ffe-4083-aba6-caac9c1cd0d7 280452 0 2020-03-09 09:46:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:46:31.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1570" for this suite. • [SLOW TEST:10.138 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":260,"skipped":4341,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:46:31.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-ce53eb27-8b05-4ce2-96e4-d25b2c2c9e10 in namespace container-probe-2592 Mar 9 09:46:33.919: INFO: Started pod liveness-ce53eb27-8b05-4ce2-96e4-d25b2c2c9e10 in namespace container-probe-2592 STEP: checking the pod's current state and verifying that restartCount is present Mar 9 09:46:33.922: INFO: Initial restart count of pod liveness-ce53eb27-8b05-4ce2-96e4-d25b2c2c9e10 is 0 Mar 9 09:46:52.022: INFO: Restart count of pod container-probe-2592/liveness-ce53eb27-8b05-4ce2-96e4-d25b2c2c9e10 is now 1 (18.099539713s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:46:52.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-probe-2592" for this suite. • [SLOW TEST:20.243 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4351,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:46:52.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-2e433440-f098-4728-906d-0f21b1785afe STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:46:56.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6285" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:46:56.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:46:56.237: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 9 09:46:59.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1638 create -f -' Mar 9 09:47:02.508: INFO: stderr: "" Mar 9 09:47:02.508: INFO: stdout: "e2e-test-crd-publish-openapi-9755-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 9 09:47:02.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1638 delete e2e-test-crd-publish-openapi-9755-crds test-foo' Mar 9 09:47:02.596: INFO: stderr: "" Mar 9 09:47:02.596: INFO: stdout: "e2e-test-crd-publish-openapi-9755-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 9 09:47:02.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1638 apply -f -' Mar 9 09:47:02.842: INFO: stderr: "" Mar 9 09:47:02.842: INFO: stdout: "e2e-test-crd-publish-openapi-9755-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 9 09:47:02.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1638 delete e2e-test-crd-publish-openapi-9755-crds test-foo' Mar 9 09:47:02.967: INFO: stderr: "" Mar 9 09:47:02.967: INFO: stdout: "e2e-test-crd-publish-openapi-9755-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 9 09:47:02.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1638 create -f -' Mar 9 09:47:03.194: INFO: rc: 1 Mar 9 09:47:03.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1638 apply -f -' Mar 9 09:47:03.411: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 9 09:47:03.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1638 create -f -' Mar 9 09:47:03.633: INFO: rc: 1 Mar 9 09:47:03.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1638 apply -f -' Mar 9 09:47:03.849: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 9 09:47:03.850: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9755-crds' Mar 9 09:47:04.087: INFO: stderr: "" Mar 9 09:47:04.087: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9755-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 9 09:47:04.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9755-crds.metadata' Mar 9 09:47:04.305: INFO: stderr: "" Mar 9 09:47:04.305: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9755-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 9 09:47:04.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9755-crds.spec' Mar 9 09:47:04.530: INFO: stderr: "" Mar 9 09:47:04.531: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9755-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 9 09:47:04.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9755-crds.spec.bars' Mar 9 09:47:04.813: INFO: stderr: "" Mar 9 09:47:04.813: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9755-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 9 09:47:04.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9755-crds.spec.bars2' Mar 9 09:47:05.049: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:47:07.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1638" for this suite. • [SLOW TEST:11.756 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":263,"skipped":4386,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:47:07.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ec83dc1b-3126-4526-8c98-1c8f9bc4ee96 STEP: Creating a pod to test consume secrets Mar 9 09:47:08.126: INFO: Waiting up to 5m0s for pod "pod-secrets-720ab273-39f5-423a-96a3-827ff62a43b5" in namespace "secrets-8955" to be "success or failure" Mar 9 09:47:08.133: INFO: Pod "pod-secrets-720ab273-39f5-423a-96a3-827ff62a43b5": Phase="Pending", Reason="", 
readiness=false. Elapsed: 6.911422ms Mar 9 09:47:10.136: INFO: Pod "pod-secrets-720ab273-39f5-423a-96a3-827ff62a43b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010697172s Mar 9 09:47:12.140: INFO: Pod "pod-secrets-720ab273-39f5-423a-96a3-827ff62a43b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014361239s STEP: Saw pod success Mar 9 09:47:12.140: INFO: Pod "pod-secrets-720ab273-39f5-423a-96a3-827ff62a43b5" satisfied condition "success or failure" Mar 9 09:47:12.143: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-720ab273-39f5-423a-96a3-827ff62a43b5 container secret-volume-test: STEP: delete the pod Mar 9 09:47:12.193: INFO: Waiting for pod pod-secrets-720ab273-39f5-423a-96a3-827ff62a43b5 to disappear Mar 9 09:47:12.204: INFO: Pod pod-secrets-720ab273-39f5-423a-96a3-827ff62a43b5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:47:12.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8955" for this suite. STEP: Destroying namespace "secret-namespace-5734" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4398,"failed":0} S ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:47:12.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-82608368-fcf7-4a0e-87a5-9849560465dd STEP: Creating configMap with name cm-test-opt-upd-1542fab1-e2f4-4e42-801d-97003c9f4336 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-82608368-fcf7-4a0e-87a5-9849560465dd STEP: Updating configmap cm-test-opt-upd-1542fab1-e2f4-4e42-801d-97003c9f4336 STEP: Creating configMap with name cm-test-opt-create-3b0e53b5-50f7-4d1b-acb2-57c05961aef5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:47:18.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5898" for this suite. 
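(Editor's sketch; not part of the captured run.) The optional-updates spec above depends on ConfigMapVolumeSource.Optional: with it set, the pod starts even if the referenced ConfigMap is absent, and the delete/update/create sequence logged above shows up as file changes in the mounted volume. A sketch of such a volume, names hypothetical:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        optional := true
        vol := corev1.Volume{
            Name: "cm-opt",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
                    // Optional: the pod is not blocked on this ConfigMap
                    // existing; deletes and re-creates are reflected in
                    // the mounted files on the kubelet's sync interval.
                    Optional: &optional,
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }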
• [SLOW TEST:6.200 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4399,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:47:18.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:47:18.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4f7d4b6-22bd-4839-83e4-3ad16b73a92a" in namespace "projected-6849" to be "success or failure" Mar 9 09:47:18.505: INFO: Pod "downwardapi-volume-d4f7d4b6-22bd-4839-83e4-3ad16b73a92a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.48813ms Mar 9 09:47:20.508: INFO: Pod "downwardapi-volume-d4f7d4b6-22bd-4839-83e4-3ad16b73a92a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018938319s STEP: Saw pod success Mar 9 09:47:20.509: INFO: Pod "downwardapi-volume-d4f7d4b6-22bd-4839-83e4-3ad16b73a92a" satisfied condition "success or failure" Mar 9 09:47:20.511: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d4f7d4b6-22bd-4839-83e4-3ad16b73a92a container client-container: STEP: delete the pod Mar 9 09:47:20.544: INFO: Waiting for pod downwardapi-volume-d4f7d4b6-22bd-4839-83e4-3ad16b73a92a to disappear Mar 9 09:47:20.552: INFO: Pod downwardapi-volume-d4f7d4b6-22bd-4839-83e4-3ad16b73a92a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:47:20.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6849" for this suite. 
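(Editor's sketch; not part of the captured run.) The projected downward API specs above expose container resource values as files via resourceFieldRef. A sketch of a projected volume publishing limits.memory, names hypothetical:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                // The kubelet writes the container's effective
                                // memory limit into this file in the volume.
                                Path: "memory_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }

When no memory limit is set on the container, the value published is the node's allocatable memory, which is what the "node allocatable (memory) as default" spec later in this section relies on.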
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4409,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:47:20.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:47:20.625: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c5a6589-e5dd-4f6b-b2e8-0c3422c6b0ec" in namespace "projected-95" to be "success or failure" Mar 9 09:47:20.630: INFO: Pod "downwardapi-volume-7c5a6589-e5dd-4f6b-b2e8-0c3422c6b0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196619ms Mar 9 09:47:22.633: INFO: Pod "downwardapi-volume-7c5a6589-e5dd-4f6b-b2e8-0c3422c6b0ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007969789s STEP: Saw pod success Mar 9 09:47:22.633: INFO: Pod "downwardapi-volume-7c5a6589-e5dd-4f6b-b2e8-0c3422c6b0ec" satisfied condition "success or failure" Mar 9 09:47:22.636: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7c5a6589-e5dd-4f6b-b2e8-0c3422c6b0ec container client-container: STEP: delete the pod Mar 9 09:47:22.656: INFO: Waiting for pod downwardapi-volume-7c5a6589-e5dd-4f6b-b2e8-0c3422c6b0ec to disappear Mar 9 09:47:22.672: INFO: Pod downwardapi-volume-7c5a6589-e5dd-4f6b-b2e8-0c3422c6b0ec no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:47:22.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-95" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4411,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:47:22.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 9 09:47:22.739: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:47:28.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4800" for this suite. • [SLOW TEST:5.875 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":268,"skipped":4418,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:47:28.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:47:28.633: INFO: Waiting up to 5m0s for pod "downwardapi-volume-287dc75a-631e-45b4-9eca-ac8d1f17ac75" in namespace 
"projected-7491" to be "success or failure" Mar 9 09:47:28.638: INFO: Pod "downwardapi-volume-287dc75a-631e-45b4-9eca-ac8d1f17ac75": Phase="Pending", Reason="", readiness=false. Elapsed: 5.380688ms Mar 9 09:47:30.641: INFO: Pod "downwardapi-volume-287dc75a-631e-45b4-9eca-ac8d1f17ac75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008163052s STEP: Saw pod success Mar 9 09:47:30.641: INFO: Pod "downwardapi-volume-287dc75a-631e-45b4-9eca-ac8d1f17ac75" satisfied condition "success or failure" Mar 9 09:47:30.643: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-287dc75a-631e-45b4-9eca-ac8d1f17ac75 container client-container: STEP: delete the pod Mar 9 09:47:30.655: INFO: Waiting for pod downwardapi-volume-287dc75a-631e-45b4-9eca-ac8d1f17ac75 to disappear Mar 9 09:47:30.661: INFO: Pod downwardapi-volume-287dc75a-631e-45b4-9eca-ac8d1f17ac75 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:47:30.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7491" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:47:30.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9757/configmap-test-fd939aa6-1846-4ad0-8451-e22ece818d78 STEP: Creating a pod to test consume configMaps Mar 9 09:47:30.746: INFO: Waiting up to 5m0s for pod "pod-configmaps-bce87149-62c6-432b-b61e-abca497f9951" in namespace "configmap-9757" to be "success or failure" Mar 9 09:47:30.774: INFO: Pod "pod-configmaps-bce87149-62c6-432b-b61e-abca497f9951": Phase="Pending", Reason="", readiness=false. Elapsed: 28.079094ms Mar 9 09:47:32.778: INFO: Pod "pod-configmaps-bce87149-62c6-432b-b61e-abca497f9951": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.03235192s STEP: Saw pod success Mar 9 09:47:32.779: INFO: Pod "pod-configmaps-bce87149-62c6-432b-b61e-abca497f9951" satisfied condition "success or failure" Mar 9 09:47:32.781: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-bce87149-62c6-432b-b61e-abca497f9951 container env-test: STEP: delete the pod Mar 9 09:47:32.819: INFO: Waiting for pod pod-configmaps-bce87149-62c6-432b-b61e-abca497f9951 to disappear Mar 9 09:47:32.828: INFO: Pod pod-configmaps-bce87149-62c6-432b-b61e-abca497f9951 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:47:32.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9757" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4463,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:47:32.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 9 09:47:32.944: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30214c3f-1ec3-4c22-94e5-b045554894cb" in namespace "projected-8397" to be "success or failure" Mar 9 09:47:32.948: INFO: Pod "downwardapi-volume-30214c3f-1ec3-4c22-94e5-b045554894cb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.594929ms Mar 9 09:47:34.951: INFO: Pod "downwardapi-volume-30214c3f-1ec3-4c22-94e5-b045554894cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007175048s Mar 9 09:47:36.955: INFO: Pod "downwardapi-volume-30214c3f-1ec3-4c22-94e5-b045554894cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011356946s STEP: Saw pod success Mar 9 09:47:36.955: INFO: Pod "downwardapi-volume-30214c3f-1ec3-4c22-94e5-b045554894cb" satisfied condition "success or failure" Mar 9 09:47:36.958: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-30214c3f-1ec3-4c22-94e5-b045554894cb container client-container: STEP: delete the pod Mar 9 09:47:37.003: INFO: Waiting for pod downwardapi-volume-30214c3f-1ec3-4c22-94e5-b045554894cb to disappear Mar 9 09:47:37.008: INFO: Pod downwardapi-volume-30214c3f-1ec3-4c22-94e5-b045554894cb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:47:37.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8397" for this suite. 
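(Editor's sketch; not part of the captured run.) Each of these volume specs verifies its result by reading the test container's logs ("Trying to get logs from node ..."). Fetching logs with current client-go signatures (v0.18+; the v1.17 client used in this run predates the context parameter), pod and container names hypothetical:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Fetch the container's stdout, which holds the downward API
        // values the spec asserts on.
        raw, err := cs.CoreV1().
            Pods("projected-8397").
            GetLogs("downwardapi-volume-example", &corev1.PodLogOptions{Container: "client-container"}).
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Print(string(raw))
    }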
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4465,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:47:37.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 9 09:47:38.171: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 9 09:47:41.216: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:47:41.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7307" for this suite. STEP: Destroying namespace "webhook-7307-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":272,"skipped":4482,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:47:41.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3315 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3315 STEP: Creating statefulset with conflicting port in namespace statefulset-3315 STEP: Waiting until pod test-pod will start running in namespace statefulset-3315 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3315 Mar 9 09:47:45.584: INFO: Observed stateful pod in namespace: statefulset-3315, name: ss-0, uid: e931e24d-3efb-4de8-b4b2-f22186f03917, status phase: Pending. Waiting for statefulset controller to delete. Mar 9 09:47:46.106: INFO: Observed stateful pod in namespace: statefulset-3315, name: ss-0, uid: e931e24d-3efb-4de8-b4b2-f22186f03917, status phase: Failed. Waiting for statefulset controller to delete. Mar 9 09:47:46.118: INFO: Observed stateful pod in namespace: statefulset-3315, name: ss-0, uid: e931e24d-3efb-4de8-b4b2-f22186f03917, status phase: Failed. Waiting for statefulset controller to delete. Mar 9 09:47:46.124: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3315 STEP: Removing pod with conflicting port in namespace statefulset-3315 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3315 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 9 09:47:50.201: INFO: Deleting all statefulset in ns statefulset-3315 Mar 9 09:47:50.204: INFO: Scaling statefulset ss to 0 Mar 9 09:48:00.224: INFO: Waiting for statefulset status.replicas updated to 0 Mar 9 09:48:00.227: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:48:00.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3315" for this suite. 
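(Editor's sketch; not part of the captured run.) The StatefulSet AfterEach above scales ss to 0, waits for status.replicas to drain, and deletes it. One way to do the same through the scale subresource with current client-go; this is an illustration, not the framework's actual code, and it omits the wait between scaling and deleting:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        sts := cs.AppsV1().StatefulSets("statefulset-3315")
        // Read the scale subresource, set replicas to 0, write it back.
        scale, err := sts.GetScale(ctx, "ss", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 0
        if _, err := sts.UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        // Once status.replicas reaches 0, the object itself can go.
        if err := sts.Delete(ctx, "ss", metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }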
• [SLOW TEST:18.898 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":273,"skipped":4486,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:48:00.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9dc6f93e-9ea0-40f4-a8bf-e5ae7029d5ce STEP: Creating a pod to test consume secrets Mar 9 09:48:00.287: INFO: Waiting up to 5m0s for pod "pod-secrets-ebeed152-ff0f-4dbf-9c66-77157bb3a04f" in namespace "secrets-7559" to be "success or failure" Mar 9 09:48:00.292: INFO: Pod "pod-secrets-ebeed152-ff0f-4dbf-9c66-77157bb3a04f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291389ms Mar 9 09:48:02.302: INFO: Pod "pod-secrets-ebeed152-ff0f-4dbf-9c66-77157bb3a04f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014930989s STEP: Saw pod success Mar 9 09:48:02.302: INFO: Pod "pod-secrets-ebeed152-ff0f-4dbf-9c66-77157bb3a04f" satisfied condition "success or failure" Mar 9 09:48:02.306: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-ebeed152-ff0f-4dbf-9c66-77157bb3a04f container secret-volume-test: STEP: delete the pod Mar 9 09:48:02.338: INFO: Waiting for pod pod-secrets-ebeed152-ff0f-4dbf-9c66-77157bb3a04f to disappear Mar 9 09:48:02.345: INFO: Pod pod-secrets-ebeed152-ff0f-4dbf-9c66-77157bb3a04f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:48:02.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7559" for this suite. 
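(Editor's sketch; not part of the captured run.) The defaultMode spec above sets the permission bits on files projected from a Secret. A sketch, names hypothetical; note DefaultMode is given in octal:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // octal: owner read-only
        vol := corev1.Volume{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: "secret-test-example",
                    // DefaultMode applies to every projected file unless
                    // an individual item overrides it.
                    DefaultMode: &mode,
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }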
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4538,"failed":0} ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:48:02.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:48:06.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7330" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":275,"skipped":4538,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:48:07.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 9 09:48:07.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 9 09:48:07.341: INFO: stderr: "" Mar 9 09:48:07.341: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl 
client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:48:07.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4574" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":276,"skipped":4538,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 9 09:48:07.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 9 09:48:07.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7225' Mar 9 09:48:07.538: INFO: stderr: "" Mar 9 09:48:07.538: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866 Mar 9 09:48:07.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7225' Mar 9 09:48:16.005: INFO: stderr: "" Mar 9 09:48:16.005: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 9 09:48:16.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7225" for this suite. 
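(Editor's sketch; not part of the captured run.) The kubectl run invocation above uses --restart=Never, which creates a bare pod rather than a managed workload. The equivalent object sketched in Go:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-pod"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "e2e-test-httpd-pod",
                    Image: "docker.io/library/httpd:2.4.38-alpine",
                }},
                // Never restart: a standalone pod owned by no controller,
                // matching `kubectl run --restart=Never`.
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }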
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:48:07.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 9 09:48:07.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7225'
Mar 9 09:48:07.538: INFO: stderr: ""
Mar 9 09:48:07.538: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866
Mar 9 09:48:07.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7225'
Mar 9 09:48:16.005: INFO: stderr: ""
Mar 9 09:48:16.005: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:48:16.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7225" for this suite.
• [SLOW TEST:8.692 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1857
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":277,"skipped":4541,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
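The run-pod spec invokes `kubectl run e2e-test-httpd-pod --restart=Never ...`, which with the run-pod/v1 generator creates a bare Pod (no managing controller) whose restartPolicy is Never. A hedged client-go sketch of the equivalent object follows; the pod name and image come from the log above, while the "default" namespace and everything else are illustrative assumptions (the test actually uses its ephemeral kubectl-7225 namespace).

// runpod.go: hedged sketch of the Pod that `kubectl run --restart=Never` creates.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // the behavior under test
			Containers: []corev1.Container{{
				Name:  "e2e-test-httpd-pod",
				Image: "docker.io/library/httpd:2.4.38-alpine",
			}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}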
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:48:16.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 9 09:48:16.557: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 9 09:48:18.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719344096, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719344096, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719344096, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719344096, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 9 09:48:21.609: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:48:22.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4592" for this suite.
STEP: Destroying namespace "webhook-4592-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.077 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":278,"skipped":4562,"failed":0}
SSS
Mar 9 09:48:22.119: INFO: Running AfterSuite actions on all nodes
Mar 9 09:48:22.119: INFO: Running AfterSuite actions on node 1
Mar 9 09:48:22.119: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0}

Ran 278 of 4843 Specs in 3920.864 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped
PASS
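For reference on the suite's final spec: "listing mutating webhooks should work" exercises the list and delete-collection verbs on mutatingwebhookconfigurations, then confirms that a ConfigMap created afterwards is no longer mutated. A hedged client-go sketch of those two API calls follows; the label selector is a hypothetical stand-in for whatever label the e2e test applies to its own configurations.

// listwebhooks.go: hedged sketch of listing and collection-deleting
// mutating webhook configurations; not the conformance test's code.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Hypothetical selector; the test labels its configurations so it
	// can find and remove exactly the ones it created.
	selector := metav1.ListOptions{LabelSelector: "e2e-list-test=true"}

	list, err := client.AdmissionregistrationV1().
		MutatingWebhookConfigurations().List(context.TODO(), selector)
	if err != nil {
		panic(err)
	}
	for _, wh := range list.Items {
		fmt.Println("found mutating webhook configuration:", wh.Name)
	}

	// DeleteCollection removes every configuration the selector matches,
	// after which new objects should no longer be mutated.
	err = client.AdmissionregistrationV1().MutatingWebhookConfigurations().
		DeleteCollection(context.TODO(), metav1.DeleteOptions{}, selector)
	if err != nil {
		panic(err)
	}
}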