I0316 21:07:02.193618 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0316 21:07:02.193868 6 e2e.go:109] Starting e2e run "1cc8f662-dcb7-4362-9887-8f3eba70548e" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584392821 - Will randomize all specs
Will run 278 of 4843 specs
Mar 16 21:07:02.243: INFO: >>> kubeConfig: /root/.kube/config
Mar 16 21:07:02.249: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 16 21:07:02.276: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 16 21:07:02.316: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 16 21:07:02.316: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 16 21:07:02.316: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 16 21:07:02.326: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 16 21:07:02.326: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 16 21:07:02.326: INFO: e2e test version: v1.17.3
Mar 16 21:07:02.327: INFO: kube-apiserver version: v1.17.2
Mar 16 21:07:02.327: INFO: >>> kubeConfig: /root/.kube/config
Mar 16 21:07:02.330: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 16 21:07:02.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Mar 16 21:07:02.391: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Mar 16 21:07:02.392: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 16 21:07:02.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8267" for this suite.
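
The proxy test just completed passes -p 0 so the kernel picks a free port, then fetches /api/ through the proxy; the pass record follows below. A minimal Go sketch of the same flow, assuming kubectl on PATH with a working kubeconfig (the banner parsing and names here are illustrative, not the suite's code):

// Sketch: start "kubectl proxy -p 0", read the self-assigned port from the
// "Starting to serve on 127.0.0.1:<port>" banner, then GET /api/ through it.
package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"regexp"
)

func main() {
	cmd := exec.Command("kubectl", "proxy", "-p", "0")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	banner, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	m := regexp.MustCompile(`127\.0\.0\.1:(\d+)`).FindStringSubmatch(banner)
	if m == nil {
		panic("unexpected proxy banner: " + banner)
	}

	resp, err := http.Get("http://127.0.0.1:" + m[1] + "/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("proxy port %s, /api/ response: %s\n", m[1], body)
}

Port 0 is the interesting part: the test verifies kubectl reports the port the kernel actually assigned rather than failing on an "invalid" port.
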
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":1,"skipped":34,"failed":0} ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:07:02.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 16 21:07:12.621: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4180 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:07:12.621: INFO: >>> kubeConfig: /root/.kube/config I0316 21:07:12.657222 6 log.go:172] (0xc002b18dc0) (0xc002083ea0) Create stream I0316 21:07:12.657255 6 log.go:172] (0xc002b18dc0) (0xc002083ea0) Stream added, broadcasting: 1 I0316 21:07:12.659719 6 log.go:172] (0xc002b18dc0) Reply frame received for 1 I0316 21:07:12.659764 6 log.go:172] (0xc002b18dc0) (0xc001c4e000) Create stream I0316 21:07:12.659774 6 log.go:172] (0xc002b18dc0) (0xc001c4e000) Stream added, broadcasting: 3 I0316 21:07:12.660751 6 log.go:172] (0xc002b18dc0) Reply frame received for 3 I0316 21:07:12.660799 6 log.go:172] (0xc002b18dc0) (0xc002083f40) Create stream I0316 21:07:12.660817 6 log.go:172] (0xc002b18dc0) (0xc002083f40) Stream added, broadcasting: 5 I0316 21:07:12.661805 6 log.go:172] (0xc002b18dc0) Reply frame received for 5 I0316 21:07:12.730695 6 log.go:172] (0xc002b18dc0) Data frame received for 5 I0316 21:07:12.730719 6 log.go:172] (0xc002083f40) (5) Data frame handling I0316 21:07:12.730770 6 log.go:172] (0xc002b18dc0) Data frame received for 3 I0316 21:07:12.730801 6 log.go:172] (0xc001c4e000) (3) Data frame handling I0316 21:07:12.730816 6 log.go:172] (0xc001c4e000) (3) Data frame sent I0316 21:07:12.730825 6 log.go:172] (0xc002b18dc0) Data frame received for 3 I0316 21:07:12.730834 6 log.go:172] (0xc001c4e000) (3) Data frame handling I0316 21:07:12.731644 6 log.go:172] (0xc002b18dc0) Data frame received for 1 I0316 21:07:12.731661 6 log.go:172] (0xc002083ea0) (1) Data frame handling I0316 21:07:12.731669 6 log.go:172] (0xc002083ea0) (1) Data frame sent I0316 21:07:12.731683 6 log.go:172] (0xc002b18dc0) (0xc002083ea0) Stream removed, broadcasting: 1 I0316 21:07:12.731715 6 log.go:172] (0xc002b18dc0) Go away received I0316 21:07:12.732019 6 log.go:172] (0xc002b18dc0) (0xc002083ea0) Stream removed, broadcasting: 1 I0316 21:07:12.732031 6 log.go:172] (0xc002b18dc0) (0xc001c4e000) Stream removed, broadcasting: 3 I0316 21:07:12.732037 6 log.go:172] (0xc002b18dc0) (0xc002083f40) Stream removed, broadcasting: 5 Mar 16 21:07:12.732: INFO: Exec stderr: "" Mar 16 21:07:12.732: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4180 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:07:12.732: INFO: >>> kubeConfig: /root/.kube/config I0316 21:07:12.755977 6 log.go:172] (0xc002e21550) (0xc001b48280) Create stream I0316 21:07:12.756002 6 log.go:172] (0xc002e21550) (0xc001b48280) Stream added, broadcasting: 1 I0316 21:07:12.758431 6 log.go:172] (0xc002e21550) Reply frame received for 1 I0316 21:07:12.758465 6 log.go:172] (0xc002e21550) (0xc001c0ce60) Create stream I0316 21:07:12.758477 6 log.go:172] (0xc002e21550) (0xc001c0ce60) Stream added, broadcasting: 3 I0316 21:07:12.759242 6 log.go:172] (0xc002e21550) Reply frame received for 3 I0316 21:07:12.759272 6 log.go:172] (0xc002e21550) (0xc001c0cfa0) Create stream I0316 21:07:12.759281 6 log.go:172] (0xc002e21550) (0xc001c0cfa0) Stream added, broadcasting: 5 I0316 21:07:12.760055 6 log.go:172] (0xc002e21550) Reply frame received for 5 I0316 21:07:12.839941 6 log.go:172] (0xc002e21550) Data frame received for 5 I0316 21:07:12.840010 6 log.go:172] (0xc001c0cfa0) (5) Data frame handling I0316 21:07:12.840043 6 log.go:172] (0xc002e21550) Data frame received for 3 I0316 21:07:12.840060 6 log.go:172] (0xc001c0ce60) (3) Data frame handling I0316 21:07:12.840091 6 log.go:172] (0xc001c0ce60) (3) Data frame sent I0316 21:07:12.840108 6 log.go:172] (0xc002e21550) Data frame received for 3 I0316 21:07:12.840124 6 log.go:172] (0xc001c0ce60) (3) Data frame handling I0316 21:07:12.841891 6 log.go:172] (0xc002e21550) Data frame received for 1 I0316 21:07:12.841920 6 log.go:172] (0xc001b48280) (1) Data frame handling I0316 21:07:12.841950 6 log.go:172] (0xc001b48280) (1) Data frame sent I0316 21:07:12.841987 6 log.go:172] (0xc002e21550) (0xc001b48280) Stream removed, broadcasting: 1 I0316 21:07:12.842013 6 log.go:172] (0xc002e21550) Go away received I0316 21:07:12.842110 6 log.go:172] (0xc002e21550) (0xc001b48280) Stream removed, broadcasting: 1 I0316 21:07:12.842139 6 log.go:172] (0xc002e21550) (0xc001c0ce60) Stream removed, broadcasting: 3 I0316 21:07:12.842158 6 log.go:172] (0xc002e21550) (0xc001c0cfa0) Stream removed, broadcasting: 5 Mar 16 21:07:12.842: INFO: Exec stderr: "" Mar 16 21:07:12.842: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4180 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:07:12.842: INFO: >>> kubeConfig: /root/.kube/config I0316 21:07:12.903862 6 log.go:172] (0xc002a92b00) (0xc001c0d2c0) Create stream I0316 21:07:12.903890 6 log.go:172] (0xc002a92b00) (0xc001c0d2c0) Stream added, broadcasting: 1 I0316 21:07:12.906721 6 log.go:172] (0xc002a92b00) Reply frame received for 1 I0316 21:07:12.906751 6 log.go:172] (0xc002a92b00) (0xc001ad8000) Create stream I0316 21:07:12.906764 6 log.go:172] (0xc002a92b00) (0xc001ad8000) Stream added, broadcasting: 3 I0316 21:07:12.907855 6 log.go:172] (0xc002a92b00) Reply frame received for 3 I0316 21:07:12.907898 6 log.go:172] (0xc002a92b00) (0xc001b483c0) Create stream I0316 21:07:12.907907 6 log.go:172] (0xc002a92b00) (0xc001b483c0) Stream added, broadcasting: 5 I0316 21:07:12.908777 6 log.go:172] (0xc002a92b00) Reply frame received for 5 I0316 21:07:12.968313 6 log.go:172] (0xc002a92b00) Data frame received for 3 I0316 21:07:12.968339 6 log.go:172] (0xc001ad8000) (3) Data frame handling I0316 21:07:12.968353 6 log.go:172] (0xc001ad8000) (3) Data frame sent I0316 21:07:12.968383 6 log.go:172] 
(0xc002a92b00) Data frame received for 5 I0316 21:07:12.968419 6 log.go:172] (0xc001b483c0) (5) Data frame handling I0316 21:07:12.968446 6 log.go:172] (0xc002a92b00) Data frame received for 3 I0316 21:07:12.968473 6 log.go:172] (0xc001ad8000) (3) Data frame handling I0316 21:07:12.970104 6 log.go:172] (0xc002a92b00) Data frame received for 1 I0316 21:07:12.970124 6 log.go:172] (0xc001c0d2c0) (1) Data frame handling I0316 21:07:12.970140 6 log.go:172] (0xc001c0d2c0) (1) Data frame sent I0316 21:07:12.970162 6 log.go:172] (0xc002a92b00) (0xc001c0d2c0) Stream removed, broadcasting: 1 I0316 21:07:12.970243 6 log.go:172] (0xc002a92b00) (0xc001c0d2c0) Stream removed, broadcasting: 1 I0316 21:07:12.970256 6 log.go:172] (0xc002a92b00) (0xc001ad8000) Stream removed, broadcasting: 3 I0316 21:07:12.970328 6 log.go:172] (0xc002a92b00) Go away received I0316 21:07:12.970366 6 log.go:172] (0xc002a92b00) (0xc001b483c0) Stream removed, broadcasting: 5 Mar 16 21:07:12.970: INFO: Exec stderr: "" Mar 16 21:07:12.970: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4180 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:07:12.970: INFO: >>> kubeConfig: /root/.kube/config I0316 21:07:13.001266 6 log.go:172] (0xc002a93130) (0xc001c0d540) Create stream I0316 21:07:13.001291 6 log.go:172] (0xc002a93130) (0xc001c0d540) Stream added, broadcasting: 1 I0316 21:07:13.008344 6 log.go:172] (0xc002a93130) Reply frame received for 1 I0316 21:07:13.008411 6 log.go:172] (0xc002a93130) (0xc001c0d680) Create stream I0316 21:07:13.008430 6 log.go:172] (0xc002a93130) (0xc001c0d680) Stream added, broadcasting: 3 I0316 21:07:13.009558 6 log.go:172] (0xc002a93130) Reply frame received for 3 I0316 21:07:13.009593 6 log.go:172] (0xc002a93130) (0xc001b48460) Create stream I0316 21:07:13.009607 6 log.go:172] (0xc002a93130) (0xc001b48460) Stream added, broadcasting: 5 I0316 21:07:13.010643 6 log.go:172] (0xc002a93130) Reply frame received for 5 I0316 21:07:13.057619 6 log.go:172] (0xc002a93130) Data frame received for 3 I0316 21:07:13.057674 6 log.go:172] (0xc001c0d680) (3) Data frame handling I0316 21:07:13.057697 6 log.go:172] (0xc001c0d680) (3) Data frame sent I0316 21:07:13.057712 6 log.go:172] (0xc002a93130) Data frame received for 3 I0316 21:07:13.057730 6 log.go:172] (0xc001c0d680) (3) Data frame handling I0316 21:07:13.057757 6 log.go:172] (0xc002a93130) Data frame received for 5 I0316 21:07:13.057785 6 log.go:172] (0xc001b48460) (5) Data frame handling I0316 21:07:13.059176 6 log.go:172] (0xc002a93130) Data frame received for 1 I0316 21:07:13.059199 6 log.go:172] (0xc001c0d540) (1) Data frame handling I0316 21:07:13.059214 6 log.go:172] (0xc001c0d540) (1) Data frame sent I0316 21:07:13.059224 6 log.go:172] (0xc002a93130) (0xc001c0d540) Stream removed, broadcasting: 1 I0316 21:07:13.059237 6 log.go:172] (0xc002a93130) Go away received I0316 21:07:13.059400 6 log.go:172] (0xc002a93130) (0xc001c0d540) Stream removed, broadcasting: 1 I0316 21:07:13.059423 6 log.go:172] (0xc002a93130) (0xc001c0d680) Stream removed, broadcasting: 3 I0316 21:07:13.059436 6 log.go:172] (0xc002a93130) (0xc001b48460) Stream removed, broadcasting: 5 Mar 16 21:07:13.059: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 16 21:07:13.059: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4180 PodName:test-pod ContainerName:busybox-3 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:07:13.059: INFO: >>> kubeConfig: /root/.kube/config I0316 21:07:13.094428 6 log.go:172] (0xc002b193f0) (0xc001ad8460) Create stream I0316 21:07:13.094461 6 log.go:172] (0xc002b193f0) (0xc001ad8460) Stream added, broadcasting: 1 I0316 21:07:13.097225 6 log.go:172] (0xc002b193f0) Reply frame received for 1 I0316 21:07:13.097285 6 log.go:172] (0xc002b193f0) (0xc001ad8500) Create stream I0316 21:07:13.097303 6 log.go:172] (0xc002b193f0) (0xc001ad8500) Stream added, broadcasting: 3 I0316 21:07:13.098414 6 log.go:172] (0xc002b193f0) Reply frame received for 3 I0316 21:07:13.098458 6 log.go:172] (0xc002b193f0) (0xc001b48500) Create stream I0316 21:07:13.098474 6 log.go:172] (0xc002b193f0) (0xc001b48500) Stream added, broadcasting: 5 I0316 21:07:13.099407 6 log.go:172] (0xc002b193f0) Reply frame received for 5 I0316 21:07:13.157449 6 log.go:172] (0xc002b193f0) Data frame received for 5 I0316 21:07:13.157501 6 log.go:172] (0xc001b48500) (5) Data frame handling I0316 21:07:13.157548 6 log.go:172] (0xc002b193f0) Data frame received for 3 I0316 21:07:13.157590 6 log.go:172] (0xc001ad8500) (3) Data frame handling I0316 21:07:13.157617 6 log.go:172] (0xc001ad8500) (3) Data frame sent I0316 21:07:13.157659 6 log.go:172] (0xc002b193f0) Data frame received for 3 I0316 21:07:13.157690 6 log.go:172] (0xc001ad8500) (3) Data frame handling I0316 21:07:13.159377 6 log.go:172] (0xc002b193f0) Data frame received for 1 I0316 21:07:13.159406 6 log.go:172] (0xc001ad8460) (1) Data frame handling I0316 21:07:13.159431 6 log.go:172] (0xc001ad8460) (1) Data frame sent I0316 21:07:13.159452 6 log.go:172] (0xc002b193f0) (0xc001ad8460) Stream removed, broadcasting: 1 I0316 21:07:13.159474 6 log.go:172] (0xc002b193f0) Go away received I0316 21:07:13.159610 6 log.go:172] (0xc002b193f0) (0xc001ad8460) Stream removed, broadcasting: 1 I0316 21:07:13.159635 6 log.go:172] (0xc002b193f0) (0xc001ad8500) Stream removed, broadcasting: 3 I0316 21:07:13.159653 6 log.go:172] (0xc002b193f0) (0xc001b48500) Stream removed, broadcasting: 5 Mar 16 21:07:13.159: INFO: Exec stderr: "" Mar 16 21:07:13.159: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4180 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:07:13.159: INFO: >>> kubeConfig: /root/.kube/config I0316 21:07:13.195821 6 log.go:172] (0xc002b19a20) (0xc001ad8a00) Create stream I0316 21:07:13.195851 6 log.go:172] (0xc002b19a20) (0xc001ad8a00) Stream added, broadcasting: 1 I0316 21:07:13.198473 6 log.go:172] (0xc002b19a20) Reply frame received for 1 I0316 21:07:13.198521 6 log.go:172] (0xc002b19a20) (0xc001c0d7c0) Create stream I0316 21:07:13.198536 6 log.go:172] (0xc002b19a20) (0xc001c0d7c0) Stream added, broadcasting: 3 I0316 21:07:13.199551 6 log.go:172] (0xc002b19a20) Reply frame received for 3 I0316 21:07:13.199600 6 log.go:172] (0xc002b19a20) (0xc001b485a0) Create stream I0316 21:07:13.199615 6 log.go:172] (0xc002b19a20) (0xc001b485a0) Stream added, broadcasting: 5 I0316 21:07:13.200796 6 log.go:172] (0xc002b19a20) Reply frame received for 5 I0316 21:07:13.254734 6 log.go:172] (0xc002b19a20) Data frame received for 3 I0316 21:07:13.254777 6 log.go:172] (0xc001c0d7c0) (3) Data frame handling I0316 21:07:13.254788 6 log.go:172] (0xc001c0d7c0) (3) Data frame sent I0316 21:07:13.254796 6 log.go:172] (0xc002b19a20) Data frame received for 3 I0316 21:07:13.254810 6 log.go:172] (0xc001c0d7c0) (3) 
Data frame handling I0316 21:07:13.254832 6 log.go:172] (0xc002b19a20) Data frame received for 5 I0316 21:07:13.254842 6 log.go:172] (0xc001b485a0) (5) Data frame handling I0316 21:07:13.256103 6 log.go:172] (0xc002b19a20) Data frame received for 1 I0316 21:07:13.256121 6 log.go:172] (0xc001ad8a00) (1) Data frame handling I0316 21:07:13.256137 6 log.go:172] (0xc001ad8a00) (1) Data frame sent I0316 21:07:13.256157 6 log.go:172] (0xc002b19a20) (0xc001ad8a00) Stream removed, broadcasting: 1 I0316 21:07:13.256191 6 log.go:172] (0xc002b19a20) Go away received I0316 21:07:13.256242 6 log.go:172] (0xc002b19a20) (0xc001ad8a00) Stream removed, broadcasting: 1 I0316 21:07:13.256261 6 log.go:172] (0xc002b19a20) (0xc001c0d7c0) Stream removed, broadcasting: 3 I0316 21:07:13.256281 6 log.go:172] (0xc002b19a20) (0xc001b485a0) Stream removed, broadcasting: 5 Mar 16 21:07:13.256: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 16 21:07:13.256: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4180 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:07:13.256: INFO: >>> kubeConfig: /root/.kube/config I0316 21:07:13.292035 6 log.go:172] (0xc0018d80b0) (0xc001ad8d20) Create stream I0316 21:07:13.292059 6 log.go:172] (0xc0018d80b0) (0xc001ad8d20) Stream added, broadcasting: 1 I0316 21:07:13.295008 6 log.go:172] (0xc0018d80b0) Reply frame received for 1 I0316 21:07:13.295048 6 log.go:172] (0xc0018d80b0) (0xc002a89220) Create stream I0316 21:07:13.295055 6 log.go:172] (0xc0018d80b0) (0xc002a89220) Stream added, broadcasting: 3 I0316 21:07:13.295998 6 log.go:172] (0xc0018d80b0) Reply frame received for 3 I0316 21:07:13.296044 6 log.go:172] (0xc0018d80b0) (0xc002a892c0) Create stream I0316 21:07:13.296062 6 log.go:172] (0xc0018d80b0) (0xc002a892c0) Stream added, broadcasting: 5 I0316 21:07:13.296963 6 log.go:172] (0xc0018d80b0) Reply frame received for 5 I0316 21:07:13.347379 6 log.go:172] (0xc0018d80b0) Data frame received for 5 I0316 21:07:13.347418 6 log.go:172] (0xc002a892c0) (5) Data frame handling I0316 21:07:13.347493 6 log.go:172] (0xc0018d80b0) Data frame received for 3 I0316 21:07:13.347557 6 log.go:172] (0xc002a89220) (3) Data frame handling I0316 21:07:13.347591 6 log.go:172] (0xc002a89220) (3) Data frame sent I0316 21:07:13.347613 6 log.go:172] (0xc0018d80b0) Data frame received for 3 I0316 21:07:13.347631 6 log.go:172] (0xc002a89220) (3) Data frame handling I0316 21:07:13.348821 6 log.go:172] (0xc0018d80b0) Data frame received for 1 I0316 21:07:13.348863 6 log.go:172] (0xc001ad8d20) (1) Data frame handling I0316 21:07:13.348909 6 log.go:172] (0xc001ad8d20) (1) Data frame sent I0316 21:07:13.349033 6 log.go:172] (0xc0018d80b0) (0xc001ad8d20) Stream removed, broadcasting: 1 I0316 21:07:13.349079 6 log.go:172] (0xc0018d80b0) Go away received I0316 21:07:13.349301 6 log.go:172] (0xc0018d80b0) (0xc001ad8d20) Stream removed, broadcasting: 1 I0316 21:07:13.349336 6 log.go:172] (0xc0018d80b0) (0xc002a89220) Stream removed, broadcasting: 3 I0316 21:07:13.349353 6 log.go:172] (0xc0018d80b0) (0xc002a892c0) Stream removed, broadcasting: 5 Mar 16 21:07:13.349: INFO: Exec stderr: "" Mar 16 21:07:13.349: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4180 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 
21:07:13.349: INFO: >>> kubeConfig: /root/.kube/config I0316 21:07:13.387233 6 log.go:172] (0xc0018d86e0) (0xc001ad8f00) Create stream I0316 21:07:13.387264 6 log.go:172] (0xc0018d86e0) (0xc001ad8f00) Stream added, broadcasting: 1 I0316 21:07:13.390424 6 log.go:172] (0xc0018d86e0) Reply frame received for 1 I0316 21:07:13.390467 6 log.go:172] (0xc0018d86e0) (0xc001ad8fa0) Create stream I0316 21:07:13.390484 6 log.go:172] (0xc0018d86e0) (0xc001ad8fa0) Stream added, broadcasting: 3 I0316 21:07:13.391513 6 log.go:172] (0xc0018d86e0) Reply frame received for 3 I0316 21:07:13.391549 6 log.go:172] (0xc0018d86e0) (0xc001ad9040) Create stream I0316 21:07:13.391563 6 log.go:172] (0xc0018d86e0) (0xc001ad9040) Stream added, broadcasting: 5 I0316 21:07:13.392656 6 log.go:172] (0xc0018d86e0) Reply frame received for 5 I0316 21:07:13.452160 6 log.go:172] (0xc0018d86e0) Data frame received for 5 I0316 21:07:13.452208 6 log.go:172] (0xc0018d86e0) Data frame received for 3 I0316 21:07:13.452250 6 log.go:172] (0xc001ad8fa0) (3) Data frame handling I0316 21:07:13.452268 6 log.go:172] (0xc001ad8fa0) (3) Data frame sent I0316 21:07:13.452293 6 log.go:172] (0xc0018d86e0) Data frame received for 3 I0316 21:07:13.452305 6 log.go:172] (0xc001ad8fa0) (3) Data frame handling I0316 21:07:13.452341 6 log.go:172] (0xc001ad9040) (5) Data frame handling I0316 21:07:13.454086 6 log.go:172] (0xc0018d86e0) Data frame received for 1 I0316 21:07:13.454107 6 log.go:172] (0xc001ad8f00) (1) Data frame handling I0316 21:07:13.454131 6 log.go:172] (0xc001ad8f00) (1) Data frame sent I0316 21:07:13.454172 6 log.go:172] (0xc0018d86e0) (0xc001ad8f00) Stream removed, broadcasting: 1 I0316 21:07:13.454235 6 log.go:172] (0xc0018d86e0) Go away received I0316 21:07:13.454299 6 log.go:172] (0xc0018d86e0) (0xc001ad8f00) Stream removed, broadcasting: 1 I0316 21:07:13.454326 6 log.go:172] (0xc0018d86e0) (0xc001ad8fa0) Stream removed, broadcasting: 3 I0316 21:07:13.454361 6 log.go:172] (0xc0018d86e0) (0xc001ad9040) Stream removed, broadcasting: 5 Mar 16 21:07:13.454: INFO: Exec stderr: "" Mar 16 21:07:13.454: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4180 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:07:13.454: INFO: >>> kubeConfig: /root/.kube/config I0316 21:07:13.489562 6 log.go:172] (0xc002a93760) (0xc001c0dae0) Create stream I0316 21:07:13.489606 6 log.go:172] (0xc002a93760) (0xc001c0dae0) Stream added, broadcasting: 1 I0316 21:07:13.492431 6 log.go:172] (0xc002a93760) Reply frame received for 1 I0316 21:07:13.492471 6 log.go:172] (0xc002a93760) (0xc001b48640) Create stream I0316 21:07:13.492481 6 log.go:172] (0xc002a93760) (0xc001b48640) Stream added, broadcasting: 3 I0316 21:07:13.493698 6 log.go:172] (0xc002a93760) Reply frame received for 3 I0316 21:07:13.493739 6 log.go:172] (0xc002a93760) (0xc001ad9220) Create stream I0316 21:07:13.493754 6 log.go:172] (0xc002a93760) (0xc001ad9220) Stream added, broadcasting: 5 I0316 21:07:13.494730 6 log.go:172] (0xc002a93760) Reply frame received for 5 I0316 21:07:13.546494 6 log.go:172] (0xc002a93760) Data frame received for 3 I0316 21:07:13.546558 6 log.go:172] (0xc001b48640) (3) Data frame handling I0316 21:07:13.546612 6 log.go:172] (0xc001b48640) (3) Data frame sent I0316 21:07:13.546640 6 log.go:172] (0xc002a93760) Data frame received for 3 I0316 21:07:13.546660 6 log.go:172] (0xc001b48640) (3) Data frame handling I0316 21:07:13.546682 6 log.go:172] (0xc002a93760) 
Data frame received for 5 I0316 21:07:13.546701 6 log.go:172] (0xc001ad9220) (5) Data frame handling I0316 21:07:13.548323 6 log.go:172] (0xc002a93760) Data frame received for 1 I0316 21:07:13.548350 6 log.go:172] (0xc001c0dae0) (1) Data frame handling I0316 21:07:13.548371 6 log.go:172] (0xc001c0dae0) (1) Data frame sent I0316 21:07:13.548510 6 log.go:172] (0xc002a93760) (0xc001c0dae0) Stream removed, broadcasting: 1 I0316 21:07:13.548569 6 log.go:172] (0xc002a93760) Go away received I0316 21:07:13.548641 6 log.go:172] (0xc002a93760) (0xc001c0dae0) Stream removed, broadcasting: 1 I0316 21:07:13.548666 6 log.go:172] (0xc002a93760) (0xc001b48640) Stream removed, broadcasting: 3 I0316 21:07:13.548681 6 log.go:172] (0xc002a93760) (0xc001ad9220) Stream removed, broadcasting: 5 Mar 16 21:07:13.548: INFO: Exec stderr: "" Mar 16 21:07:13.548: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4180 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:07:13.548: INFO: >>> kubeConfig: /root/.kube/config I0316 21:07:13.586797 6 log.go:172] (0xc002a93d90) (0xc001c0dd60) Create stream I0316 21:07:13.586830 6 log.go:172] (0xc002a93d90) (0xc001c0dd60) Stream added, broadcasting: 1 I0316 21:07:13.589549 6 log.go:172] (0xc002a93d90) Reply frame received for 1 I0316 21:07:13.589588 6 log.go:172] (0xc002a93d90) (0xc001ad92c0) Create stream I0316 21:07:13.589603 6 log.go:172] (0xc002a93d90) (0xc001ad92c0) Stream added, broadcasting: 3 I0316 21:07:13.590616 6 log.go:172] (0xc002a93d90) Reply frame received for 3 I0316 21:07:13.590656 6 log.go:172] (0xc002a93d90) (0xc001c4e140) Create stream I0316 21:07:13.590668 6 log.go:172] (0xc002a93d90) (0xc001c4e140) Stream added, broadcasting: 5 I0316 21:07:13.591518 6 log.go:172] (0xc002a93d90) Reply frame received for 5 I0316 21:07:13.649665 6 log.go:172] (0xc002a93d90) Data frame received for 5 I0316 21:07:13.649707 6 log.go:172] (0xc001c4e140) (5) Data frame handling I0316 21:07:13.649733 6 log.go:172] (0xc002a93d90) Data frame received for 3 I0316 21:07:13.649748 6 log.go:172] (0xc001ad92c0) (3) Data frame handling I0316 21:07:13.649763 6 log.go:172] (0xc001ad92c0) (3) Data frame sent I0316 21:07:13.649776 6 log.go:172] (0xc002a93d90) Data frame received for 3 I0316 21:07:13.649788 6 log.go:172] (0xc001ad92c0) (3) Data frame handling I0316 21:07:13.651355 6 log.go:172] (0xc002a93d90) Data frame received for 1 I0316 21:07:13.651373 6 log.go:172] (0xc001c0dd60) (1) Data frame handling I0316 21:07:13.651391 6 log.go:172] (0xc001c0dd60) (1) Data frame sent I0316 21:07:13.651407 6 log.go:172] (0xc002a93d90) (0xc001c0dd60) Stream removed, broadcasting: 1 I0316 21:07:13.651434 6 log.go:172] (0xc002a93d90) Go away received I0316 21:07:13.651523 6 log.go:172] (0xc002a93d90) (0xc001c0dd60) Stream removed, broadcasting: 1 I0316 21:07:13.651538 6 log.go:172] (0xc002a93d90) (0xc001ad92c0) Stream removed, broadcasting: 3 I0316 21:07:13.651546 6 log.go:172] (0xc002a93d90) (0xc001c4e140) Stream removed, broadcasting: 5 Mar 16 21:07:13.651: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:07:13.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4180" for this suite. 
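
The /etc/hosts test above checks three cases: the kubelet injects a managed /etc/hosts into containers of a hostNetwork=false pod, leaves the file alone when a container mounts its own volume at /etc/hosts, and leaves the node's file alone for hostNetwork=true pods. A minimal sketch of the pod shape involved, using k8s.io/api types with illustrative names and images (not the suite's exact fixtures):

// Sketch: one pod, one container managed by the kubelet, one opting out
// by mounting its own volume at /etc/hosts.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-demo"},
		Spec: corev1.PodSpec{
			HostNetwork: false, // with true, the kubelet leaves /etc/hosts alone
			Volumes: []corev1.Volume{{
				Name: "hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{
					Name:    "managed",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
					// no /etc/hosts mount: the kubelet injects its managed file
				},
				{
					Name:    "unmanaged",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
					// explicit mount at /etc/hosts: kubelet management is skipped
					VolumeMounts: []corev1.VolumeMount{{Name: "hosts", MountPath: "/etc/hosts"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
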
• [SLOW TEST:11.175 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 16 21:07:13.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-c2c9098f-2560-40b8-bc3a-7bc06f415a92 in namespace container-probe-5363
Mar 16 21:07:17.740: INFO: Started pod test-webserver-c2c9098f-2560-40b8-bc3a-7bc06f415a92 in namespace container-probe-5363
STEP: checking the pod's current state and verifying that restartCount is present
Mar 16 21:07:17.743: INFO: Initial restart count of pod test-webserver-c2c9098f-2560-40b8-bc3a-7bc06f415a92 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 16 21:11:18.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5363" for this suite.
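
This probe test starts a web server with an HTTP liveness probe that keeps succeeding, then watches for about four minutes (hence the ~244-second SLOW TEST line that follows) to confirm restartCount stays 0. A sketch of the pod shape, with illustrative image and values; field names match k8s.io/api v0.17.x, where Probe embeds Handler (later releases rename it ProbeHandler):

// Sketch: a liveness probe that stays healthy, so the kubelet never restarts
// the container and restartCount remains 0.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0", // illustrative
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    3, // 3 consecutive failures would trigger a restart
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
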
• [SLOW TEST:244.674 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":54,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 16 21:11:18.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-620/configmap-test-589e3011-0bd7-4e00-bda3-2d2d80130e15
STEP: Creating a pod to test consume configMaps
Mar 16 21:11:18.681: INFO: Waiting up to 5m0s for pod "pod-configmaps-66f714d3-0f91-42e4-8791-d1505f543b98" in namespace "configmap-620" to be "success or failure"
Mar 16 21:11:18.709: INFO: Pod "pod-configmaps-66f714d3-0f91-42e4-8791-d1505f543b98": Phase="Pending", Reason="", readiness=false. Elapsed: 27.417817ms
Mar 16 21:11:20.714: INFO: Pod "pod-configmaps-66f714d3-0f91-42e4-8791-d1505f543b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032136166s
Mar 16 21:11:22.718: INFO: Pod "pod-configmaps-66f714d3-0f91-42e4-8791-d1505f543b98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036708979s
STEP: Saw pod success
Mar 16 21:11:22.718: INFO: Pod "pod-configmaps-66f714d3-0f91-42e4-8791-d1505f543b98" satisfied condition "success or failure"
Mar 16 21:11:22.721: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-66f714d3-0f91-42e4-8791-d1505f543b98 container env-test:
STEP: delete the pod
Mar 16 21:11:22.752: INFO: Waiting for pod pod-configmaps-66f714d3-0f91-42e4-8791-d1505f543b98 to disappear
Mar 16 21:11:22.756: INFO: Pod pod-configmaps-66f714d3-0f91-42e4-8791-d1505f543b98 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 16 21:11:22.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-620" for this suite.
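
The ConfigMap test above injects a key into a container through an environment variable (configMapKeyRef) and treats the pod reaching Succeeded as the pass condition. A minimal sketch of the two objects involved, names and values illustrative:

// Sketch: a ConfigMap and a pod consuming one of its keys via env.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // pod should run once and succeed
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
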
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":78,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 16 21:11:22.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-11609ab1-8449-4778-bb23-43e6dd8c6d0e
STEP: Creating a pod to test consume configMaps
Mar 16 21:11:22.854: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec2c0caf-0600-4874-8554-f47fd2ffee35" in namespace "configmap-1129" to be "success or failure"
Mar 16 21:11:22.882: INFO: Pod "pod-configmaps-ec2c0caf-0600-4874-8554-f47fd2ffee35": Phase="Pending", Reason="", readiness=false. Elapsed: 27.920882ms
Mar 16 21:11:24.886: INFO: Pod "pod-configmaps-ec2c0caf-0600-4874-8554-f47fd2ffee35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031665942s
Mar 16 21:11:26.890: INFO: Pod "pod-configmaps-ec2c0caf-0600-4874-8554-f47fd2ffee35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035770989s
STEP: Saw pod success
Mar 16 21:11:26.890: INFO: Pod "pod-configmaps-ec2c0caf-0600-4874-8554-f47fd2ffee35" satisfied condition "success or failure"
Mar 16 21:11:26.894: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ec2c0caf-0600-4874-8554-f47fd2ffee35 container configmap-volume-test:
STEP: delete the pod
Mar 16 21:11:26.946: INFO: Waiting for pod pod-configmaps-ec2c0caf-0600-4874-8554-f47fd2ffee35 to disappear
Mar 16 21:11:26.951: INFO: Pod pod-configmaps-ec2c0caf-0600-4874-8554-f47fd2ffee35 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 16 21:11:26.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1129" for this suite.
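
Here the ConfigMap is consumed as a volume instead, and the "mappings" in the test name refer to items entries that project a key to a custom path inside the mount. A sketch of the volume wiring, names illustrative:

// Sketch: mount a ConfigMap as a volume, remapping key "data-1" to a
// custom relative path, and read it back from the container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// the mapping: key data-1 appears at path/to/data-2
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
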
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":127,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 16 21:11:26.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 16 21:11:27.915: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 16 21:11:29.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719989887, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719989887, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719989887, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719989887, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 16 21:11:32.958: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 16 21:11:32.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8549-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 16 21:11:33.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1870" for this suite.
STEP: Destroying namespace "webhook-1870-markers" for this suite.
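
The webhook test above deploys a webhook server, registers a mutating admission webhook against a freshly created custom resource type, and verifies the mutation survives structural-schema pruning. A sketch of the registration object it sends to the AdmissionRegistration API (admissionregistration.k8s.io/v1); the names, group, path, and CA bundle here are illustrative placeholders, not the suite's generated values:

// Sketch: register a mutating webhook for CREATE of a custom resource.
package main

import (
	"encoding/json"
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/mutating-custom-resource" // illustrative service path
	sideEffects := admissionv1.SideEffectClassNone
	cfg := &admissionv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
		Webhooks: []admissionv1.MutatingWebhook{{
			Name: "mutate-custom-resource.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-markers", // illustrative
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("<PEM bundle>"), // placeholder, not a real cert
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}

With pruning, any field the webhook patches in that is not declared in the CRD's structural schema gets dropped by the API server, which is exactly what the test checks for.
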
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.795 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":6,"skipped":127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 16 21:11:33.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-71177c96-5809-434c-b269-1502c76e4508
STEP: Creating a pod to test consume secrets
Mar 16 21:11:33.883: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7d6f41a7-3a24-4576-843e-dcb23e5eafff" in namespace "projected-6834" to be "success or failure"
Mar 16 21:11:33.886: INFO: Pod "pod-projected-secrets-7d6f41a7-3a24-4576-843e-dcb23e5eafff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.626692ms
Mar 16 21:11:35.890: INFO: Pod "pod-projected-secrets-7d6f41a7-3a24-4576-843e-dcb23e5eafff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006584467s
Mar 16 21:11:37.894: INFO: Pod "pod-projected-secrets-7d6f41a7-3a24-4576-843e-dcb23e5eafff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011153496s
STEP: Saw pod success
Mar 16 21:11:37.894: INFO: Pod "pod-projected-secrets-7d6f41a7-3a24-4576-843e-dcb23e5eafff" satisfied condition "success or failure"
Mar 16 21:11:37.897: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-7d6f41a7-3a24-4576-843e-dcb23e5eafff container projected-secret-volume-test:
STEP: delete the pod
Mar 16 21:11:37.914: INFO: Waiting for pod pod-projected-secrets-7d6f41a7-3a24-4576-843e-dcb23e5eafff to disappear
Mar 16 21:11:37.918: INFO: Pod pod-projected-secrets-7d6f41a7-3a24-4576-843e-dcb23e5eafff no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 16 21:11:37.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6834" for this suite.
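
This test mounts a Secret through a projected volume, remapping the key path and setting an explicit per-item file mode, then has the container read the file back. A sketch of the volume source, names illustrative (the mode value mirrors the common 0400 used by such tests, an assumption here):

// Sketch: a projected volume exposing one secret key at a remapped path
// with an explicit item mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // read-only for the owner; the "Item Mode" in the test name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "new-path-data-1",
									Mode: &mode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
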
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":169,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:11:37.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-8899 STEP: creating replication controller nodeport-test in namespace services-8899 I0316 21:11:38.043376 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8899, replica count: 2 I0316 21:11:41.093811 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 21:11:44.094085 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 21:11:44.094: INFO: Creating new exec pod Mar 16 21:11:49.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8899 execpodzpvbh -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 16 21:11:51.312: INFO: stderr: "I0316 21:11:51.221493 51 log.go:172] (0xc0009bcbb0) (0xc00065de00) Create stream\nI0316 21:11:51.221546 51 log.go:172] (0xc0009bcbb0) (0xc00065de00) Stream added, broadcasting: 1\nI0316 21:11:51.224251 51 log.go:172] (0xc0009bcbb0) Reply frame received for 1\nI0316 21:11:51.224295 51 log.go:172] (0xc0009bcbb0) (0xc0005f85a0) Create stream\nI0316 21:11:51.224305 51 log.go:172] (0xc0009bcbb0) (0xc0005f85a0) Stream added, broadcasting: 3\nI0316 21:11:51.225519 51 log.go:172] (0xc0009bcbb0) Reply frame received for 3\nI0316 21:11:51.225564 51 log.go:172] (0xc0009bcbb0) (0xc000737360) Create stream\nI0316 21:11:51.225580 51 log.go:172] (0xc0009bcbb0) (0xc000737360) Stream added, broadcasting: 5\nI0316 21:11:51.226562 51 log.go:172] (0xc0009bcbb0) Reply frame received for 5\nI0316 21:11:51.304781 51 log.go:172] (0xc0009bcbb0) Data frame received for 5\nI0316 21:11:51.304827 51 log.go:172] (0xc000737360) (5) Data frame handling\nI0316 21:11:51.304866 51 log.go:172] (0xc000737360) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0316 21:11:51.305614 51 log.go:172] (0xc0009bcbb0) Data frame received for 5\nI0316 21:11:51.305654 51 log.go:172] (0xc000737360) (5) Data frame handling\nI0316 21:11:51.305685 51 log.go:172] (0xc000737360) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0316 21:11:51.306207 51 log.go:172] (0xc0009bcbb0) Data frame received for 3\nI0316 21:11:51.306255 51 log.go:172] (0xc0005f85a0) (3) Data frame handling\nI0316 21:11:51.306292 
51 log.go:172] (0xc0009bcbb0) Data frame received for 5\nI0316 21:11:51.306320 51 log.go:172] (0xc000737360) (5) Data frame handling\nI0316 21:11:51.308258 51 log.go:172] (0xc0009bcbb0) Data frame received for 1\nI0316 21:11:51.308283 51 log.go:172] (0xc00065de00) (1) Data frame handling\nI0316 21:11:51.308294 51 log.go:172] (0xc00065de00) (1) Data frame sent\nI0316 21:11:51.308305 51 log.go:172] (0xc0009bcbb0) (0xc00065de00) Stream removed, broadcasting: 1\nI0316 21:11:51.308516 51 log.go:172] (0xc0009bcbb0) Go away received\nI0316 21:11:51.308624 51 log.go:172] (0xc0009bcbb0) (0xc00065de00) Stream removed, broadcasting: 1\nI0316 21:11:51.308640 51 log.go:172] (0xc0009bcbb0) (0xc0005f85a0) Stream removed, broadcasting: 3\nI0316 21:11:51.308648 51 log.go:172] (0xc0009bcbb0) (0xc000737360) Stream removed, broadcasting: 5\n" Mar 16 21:11:51.312: INFO: stdout: "" Mar 16 21:11:51.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8899 execpodzpvbh -- /bin/sh -x -c nc -zv -t -w 2 10.99.117.75 80' Mar 16 21:11:51.501: INFO: stderr: "I0316 21:11:51.437038 86 log.go:172] (0xc000b7e160) (0xc000b740a0) Create stream\nI0316 21:11:51.437243 86 log.go:172] (0xc000b7e160) (0xc000b740a0) Stream added, broadcasting: 1\nI0316 21:11:51.440807 86 log.go:172] (0xc000b7e160) Reply frame received for 1\nI0316 21:11:51.440854 86 log.go:172] (0xc000b7e160) (0xc000a9a000) Create stream\nI0316 21:11:51.440867 86 log.go:172] (0xc000b7e160) (0xc000a9a000) Stream added, broadcasting: 3\nI0316 21:11:51.442290 86 log.go:172] (0xc000b7e160) Reply frame received for 3\nI0316 21:11:51.442318 86 log.go:172] (0xc000b7e160) (0xc000b74140) Create stream\nI0316 21:11:51.442326 86 log.go:172] (0xc000b7e160) (0xc000b74140) Stream added, broadcasting: 5\nI0316 21:11:51.443342 86 log.go:172] (0xc000b7e160) Reply frame received for 5\nI0316 21:11:51.495597 86 log.go:172] (0xc000b7e160) Data frame received for 5\nI0316 21:11:51.495645 86 log.go:172] (0xc000b74140) (5) Data frame handling\nI0316 21:11:51.495664 86 log.go:172] (0xc000b74140) (5) Data frame sent\nI0316 21:11:51.495676 86 log.go:172] (0xc000b7e160) Data frame received for 5\nI0316 21:11:51.495686 86 log.go:172] (0xc000b74140) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.117.75 80\nConnection to 10.99.117.75 80 port [tcp/http] succeeded!\nI0316 21:11:51.495720 86 log.go:172] (0xc000b7e160) Data frame received for 3\nI0316 21:11:51.495747 86 log.go:172] (0xc000a9a000) (3) Data frame handling\nI0316 21:11:51.496853 86 log.go:172] (0xc000b7e160) Data frame received for 1\nI0316 21:11:51.496869 86 log.go:172] (0xc000b740a0) (1) Data frame handling\nI0316 21:11:51.496877 86 log.go:172] (0xc000b740a0) (1) Data frame sent\nI0316 21:11:51.496901 86 log.go:172] (0xc000b7e160) (0xc000b740a0) Stream removed, broadcasting: 1\nI0316 21:11:51.496941 86 log.go:172] (0xc000b7e160) Go away received\nI0316 21:11:51.497856 86 log.go:172] (0xc000b7e160) (0xc000b740a0) Stream removed, broadcasting: 1\nI0316 21:11:51.497968 86 log.go:172] (0xc000b7e160) (0xc000a9a000) Stream removed, broadcasting: 3\nI0316 21:11:51.498048 86 log.go:172] (0xc000b7e160) (0xc000b74140) Stream removed, broadcasting: 5\n" Mar 16 21:11:51.501: INFO: stdout: "" Mar 16 21:11:51.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8899 execpodzpvbh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30021' Mar 16 21:11:51.711: INFO: stderr: "I0316 21:11:51.630905 108 log.go:172] (0xc00094a630) (0xc0009080a0) Create 
stream\nI0316 21:11:51.630954 108 log.go:172] (0xc00094a630) (0xc0009080a0) Stream added, broadcasting: 1\nI0316 21:11:51.634000 108 log.go:172] (0xc00094a630) Reply frame received for 1\nI0316 21:11:51.634028 108 log.go:172] (0xc00094a630) (0xc000908140) Create stream\nI0316 21:11:51.634035 108 log.go:172] (0xc00094a630) (0xc000908140) Stream added, broadcasting: 3\nI0316 21:11:51.634923 108 log.go:172] (0xc00094a630) Reply frame received for 3\nI0316 21:11:51.634954 108 log.go:172] (0xc00094a630) (0xc000a1a000) Create stream\nI0316 21:11:51.634965 108 log.go:172] (0xc00094a630) (0xc000a1a000) Stream added, broadcasting: 5\nI0316 21:11:51.635952 108 log.go:172] (0xc00094a630) Reply frame received for 5\nI0316 21:11:51.705723 108 log.go:172] (0xc00094a630) Data frame received for 3\nI0316 21:11:51.705869 108 log.go:172] (0xc000908140) (3) Data frame handling\nI0316 21:11:51.705912 108 log.go:172] (0xc00094a630) Data frame received for 5\nI0316 21:11:51.705926 108 log.go:172] (0xc000a1a000) (5) Data frame handling\nI0316 21:11:51.705944 108 log.go:172] (0xc000a1a000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30021\nConnection to 172.17.0.10 30021 port [tcp/30021] succeeded!\nI0316 21:11:51.705962 108 log.go:172] (0xc00094a630) Data frame received for 5\nI0316 21:11:51.706012 108 log.go:172] (0xc000a1a000) (5) Data frame handling\nI0316 21:11:51.707138 108 log.go:172] (0xc00094a630) Data frame received for 1\nI0316 21:11:51.707155 108 log.go:172] (0xc0009080a0) (1) Data frame handling\nI0316 21:11:51.707165 108 log.go:172] (0xc0009080a0) (1) Data frame sent\nI0316 21:11:51.707326 108 log.go:172] (0xc00094a630) (0xc0009080a0) Stream removed, broadcasting: 1\nI0316 21:11:51.707361 108 log.go:172] (0xc00094a630) Go away received\nI0316 21:11:51.707724 108 log.go:172] (0xc00094a630) (0xc0009080a0) Stream removed, broadcasting: 1\nI0316 21:11:51.707753 108 log.go:172] (0xc00094a630) (0xc000908140) Stream removed, broadcasting: 3\nI0316 21:11:51.707775 108 log.go:172] (0xc00094a630) (0xc000a1a000) Stream removed, broadcasting: 5\n" Mar 16 21:11:51.711: INFO: stdout: "" Mar 16 21:11:51.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8899 execpodzpvbh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30021' Mar 16 21:11:51.914: INFO: stderr: "I0316 21:11:51.848779 129 log.go:172] (0xc0008429a0) (0xc0009ec000) Create stream\nI0316 21:11:51.848835 129 log.go:172] (0xc0008429a0) (0xc0009ec000) Stream added, broadcasting: 1\nI0316 21:11:51.854985 129 log.go:172] (0xc0008429a0) Reply frame received for 1\nI0316 21:11:51.855065 129 log.go:172] (0xc0008429a0) (0xc0005a7b80) Create stream\nI0316 21:11:51.855099 129 log.go:172] (0xc0008429a0) (0xc0005a7b80) Stream added, broadcasting: 3\nI0316 21:11:51.856245 129 log.go:172] (0xc0008429a0) Reply frame received for 3\nI0316 21:11:51.856281 129 log.go:172] (0xc0008429a0) (0xc0009ec0a0) Create stream\nI0316 21:11:51.856292 129 log.go:172] (0xc0008429a0) (0xc0009ec0a0) Stream added, broadcasting: 5\nI0316 21:11:51.857379 129 log.go:172] (0xc0008429a0) Reply frame received for 5\nI0316 21:11:51.907238 129 log.go:172] (0xc0008429a0) Data frame received for 3\nI0316 21:11:51.907265 129 log.go:172] (0xc0005a7b80) (3) Data frame handling\nI0316 21:11:51.907304 129 log.go:172] (0xc0008429a0) Data frame received for 5\nI0316 21:11:51.907345 129 log.go:172] (0xc0009ec0a0) (5) Data frame handling\nI0316 21:11:51.907369 129 log.go:172] (0xc0009ec0a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30021\nConnection to 
172.17.0.8 30021 port [tcp/30021] succeeded!\nI0316 21:11:51.907603 129 log.go:172] (0xc0008429a0) Data frame received for 5\nI0316 21:11:51.907622 129 log.go:172] (0xc0009ec0a0) (5) Data frame handling\nI0316 21:11:51.909301 129 log.go:172] (0xc0008429a0) Data frame received for 1\nI0316 21:11:51.909322 129 log.go:172] (0xc0009ec000) (1) Data frame handling\nI0316 21:11:51.909344 129 log.go:172] (0xc0009ec000) (1) Data frame sent\nI0316 21:11:51.909361 129 log.go:172] (0xc0008429a0) (0xc0009ec000) Stream removed, broadcasting: 1\nI0316 21:11:51.909535 129 log.go:172] (0xc0008429a0) Go away received\nI0316 21:11:51.909734 129 log.go:172] (0xc0008429a0) (0xc0009ec000) Stream removed, broadcasting: 1\nI0316 21:11:51.909753 129 log.go:172] (0xc0008429a0) (0xc0005a7b80) Stream removed, broadcasting: 3\nI0316 21:11:51.909766 129 log.go:172] (0xc0008429a0) (0xc0009ec0a0) Stream removed, broadcasting: 5\n" Mar 16 21:11:51.914: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:11:51.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8899" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:13.998 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":8,"skipped":191,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:11:51.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 16 21:11:56.025: INFO: &Pod{ObjectMeta:{send-events-88449b0d-454a-483c-ae12-d49ae2a28b47 events-8214 /api/v1/namespaces/events-8214/pods/send-events-88449b0d-454a-483c-ae12-d49ae2a28b47 d085fda5-af08-4562-b9e9-f076c8daa8c1 312614 0 2020-03-16 21:11:52 +0000 UTC map[name:foo time:6092426] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v5gph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v5gph,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v5gph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:11:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:11:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:11:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:11:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.244,StartTime:2020-03-16 21:11:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:11:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://bc0b5509fcd777b44ef6ca299f3239dfdef9b8ff20752c61b856a2a70f80c1f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 16 21:11:58.030: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 16 21:12:00.035: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:12:00.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8214" for this suite. • [SLOW TEST:8.254 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":9,"skipped":208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:12:00.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:12:16.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9150" for this suite. • [SLOW TEST:16.131 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":10,"skipped":231,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:12:16.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9580 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9580 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9580 Mar 16 21:12:16.402: INFO: Found 0 stateful pods, waiting for 1 Mar 16 21:12:26.407: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 16 21:12:26.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9580 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 21:12:26.696: INFO: stderr: "I0316 21:12:26.569846 152 log.go:172] (0xc00095c000) (0xc0005f25a0) Create stream\nI0316 21:12:26.569921 152 log.go:172] (0xc00095c000) (0xc0005f25a0) Stream added, broadcasting: 1\nI0316 21:12:26.572873 152 log.go:172] (0xc00095c000) Reply frame received for 1\nI0316 21:12:26.572938 152 log.go:172] (0xc00095c000) (0xc000ac0000) Create stream\nI0316 21:12:26.572961 152 log.go:172] (0xc00095c000) (0xc000ac0000) Stream added, broadcasting: 3\nI0316 21:12:26.574158 152 log.go:172] (0xc00095c000) Reply frame received for 3\nI0316 21:12:26.574190 152 log.go:172] (0xc00095c000) (0xc0006859a0) Create stream\nI0316 21:12:26.574205 152 log.go:172] (0xc00095c000) (0xc0006859a0) Stream added, broadcasting: 5\nI0316 21:12:26.575066 152 log.go:172] (0xc00095c000) Reply frame received for 5\nI0316 21:12:26.659263 152 log.go:172] (0xc00095c000) Data frame received for 5\nI0316 21:12:26.659285 152 log.go:172] (0xc0006859a0) (5) Data frame handling\nI0316 21:12:26.659300 152 log.go:172] (0xc0006859a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 21:12:26.689735 152 log.go:172] (0xc00095c000) Data frame received for 5\nI0316 21:12:26.689777 152 log.go:172] (0xc0006859a0) (5) Data frame handling\nI0316 21:12:26.689805 152 log.go:172] (0xc00095c000) Data frame received for 3\nI0316 21:12:26.689819 152 log.go:172] (0xc000ac0000) (3) Data frame handling\nI0316 21:12:26.689830 152 log.go:172] (0xc000ac0000) (3) Data frame sent\nI0316 21:12:26.689900 152 log.go:172] (0xc00095c000) Data 
frame received for 3\nI0316 21:12:26.689906 152 log.go:172] (0xc000ac0000) (3) Data frame handling\nI0316 21:12:26.691656 152 log.go:172] (0xc00095c000) Data frame received for 1\nI0316 21:12:26.691700 152 log.go:172] (0xc0005f25a0) (1) Data frame handling\nI0316 21:12:26.691803 152 log.go:172] (0xc0005f25a0) (1) Data frame sent\nI0316 21:12:26.691832 152 log.go:172] (0xc00095c000) (0xc0005f25a0) Stream removed, broadcasting: 1\nI0316 21:12:26.691867 152 log.go:172] (0xc00095c000) Go away received\nI0316 21:12:26.692161 152 log.go:172] (0xc00095c000) (0xc0005f25a0) Stream removed, broadcasting: 1\nI0316 21:12:26.692173 152 log.go:172] (0xc00095c000) (0xc000ac0000) Stream removed, broadcasting: 3\nI0316 21:12:26.692179 152 log.go:172] (0xc00095c000) (0xc0006859a0) Stream removed, broadcasting: 5\n" Mar 16 21:12:26.696: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 21:12:26.696: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 21:12:26.700: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 16 21:12:36.704: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 16 21:12:36.704: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 21:12:36.727: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999585s Mar 16 21:12:37.731: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989479461s Mar 16 21:12:38.735: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.985456529s Mar 16 21:12:39.739: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981455136s Mar 16 21:12:40.754: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977263502s Mar 16 21:12:41.778: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.962478578s Mar 16 21:12:42.781: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.938399474s Mar 16 21:12:43.795: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.934841049s Mar 16 21:12:44.800: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.920760198s Mar 16 21:12:45.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 915.816194ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9580 Mar 16 21:12:46.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9580 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 21:12:47.031: INFO: stderr: "I0316 21:12:46.942992 175 log.go:172] (0xc0003ce000) (0xc0006dfa40) Create stream\nI0316 21:12:46.943046 175 log.go:172] (0xc0003ce000) (0xc0006dfa40) Stream added, broadcasting: 1\nI0316 21:12:46.945331 175 log.go:172] (0xc0003ce000) Reply frame received for 1\nI0316 21:12:46.945381 175 log.go:172] (0xc0003ce000) (0xc000396000) Create stream\nI0316 21:12:46.945396 175 log.go:172] (0xc0003ce000) (0xc000396000) Stream added, broadcasting: 3\nI0316 21:12:46.946205 175 log.go:172] (0xc0003ce000) Reply frame received for 3\nI0316 21:12:46.946243 175 log.go:172] (0xc0003ce000) (0xc000228000) Create stream\nI0316 21:12:46.946258 175 log.go:172] (0xc0003ce000) (0xc000228000) Stream added, broadcasting: 5\nI0316 21:12:46.947176 175 log.go:172] (0xc0003ce000) Reply frame received for 5\nI0316 21:12:47.024280 175 log.go:172] 
(0xc0003ce000) Data frame received for 5\nI0316 21:12:47.024339 175 log.go:172] (0xc000228000) (5) Data frame handling\nI0316 21:12:47.024365 175 log.go:172] (0xc000228000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 21:12:47.024397 175 log.go:172] (0xc0003ce000) Data frame received for 3\nI0316 21:12:47.024416 175 log.go:172] (0xc000396000) (3) Data frame handling\nI0316 21:12:47.024448 175 log.go:172] (0xc000396000) (3) Data frame sent\nI0316 21:12:47.024470 175 log.go:172] (0xc0003ce000) Data frame received for 5\nI0316 21:12:47.024492 175 log.go:172] (0xc000228000) (5) Data frame handling\nI0316 21:12:47.024805 175 log.go:172] (0xc0003ce000) Data frame received for 3\nI0316 21:12:47.024832 175 log.go:172] (0xc000396000) (3) Data frame handling\nI0316 21:12:47.026521 175 log.go:172] (0xc0003ce000) Data frame received for 1\nI0316 21:12:47.026563 175 log.go:172] (0xc0006dfa40) (1) Data frame handling\nI0316 21:12:47.026601 175 log.go:172] (0xc0006dfa40) (1) Data frame sent\nI0316 21:12:47.026647 175 log.go:172] (0xc0003ce000) (0xc0006dfa40) Stream removed, broadcasting: 1\nI0316 21:12:47.026668 175 log.go:172] (0xc0003ce000) Go away received\nI0316 21:12:47.027102 175 log.go:172] (0xc0003ce000) (0xc0006dfa40) Stream removed, broadcasting: 1\nI0316 21:12:47.027128 175 log.go:172] (0xc0003ce000) (0xc000396000) Stream removed, broadcasting: 3\nI0316 21:12:47.027141 175 log.go:172] (0xc0003ce000) (0xc000228000) Stream removed, broadcasting: 5\n" Mar 16 21:12:47.031: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 21:12:47.031: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 21:12:47.035: INFO: Found 1 stateful pods, waiting for 3 Mar 16 21:12:57.039: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 21:12:57.039: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 21:12:57.039: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 16 21:12:57.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9580 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 21:12:57.257: INFO: stderr: "I0316 21:12:57.165842 197 log.go:172] (0xc0009cc000) (0xc0006df400) Create stream\nI0316 21:12:57.165888 197 log.go:172] (0xc0009cc000) (0xc0006df400) Stream added, broadcasting: 1\nI0316 21:12:57.170123 197 log.go:172] (0xc0009cc000) Reply frame received for 1\nI0316 21:12:57.170168 197 log.go:172] (0xc0009cc000) (0xc000260960) Create stream\nI0316 21:12:57.170180 197 log.go:172] (0xc0009cc000) (0xc000260960) Stream added, broadcasting: 3\nI0316 21:12:57.171042 197 log.go:172] (0xc0009cc000) Reply frame received for 3\nI0316 21:12:57.171070 197 log.go:172] (0xc0009cc000) (0xc0007a0000) Create stream\nI0316 21:12:57.171078 197 log.go:172] (0xc0009cc000) (0xc0007a0000) Stream added, broadcasting: 5\nI0316 21:12:57.171890 197 log.go:172] (0xc0009cc000) Reply frame received for 5\nI0316 21:12:57.251646 197 log.go:172] (0xc0009cc000) Data frame received for 3\nI0316 21:12:57.251707 197 log.go:172] (0xc000260960) (3) Data frame handling\nI0316 21:12:57.251732 197 log.go:172] (0xc000260960) (3) Data frame sent\nI0316 
21:12:57.251748 197 log.go:172] (0xc0009cc000) Data frame received for 3\nI0316 21:12:57.251761 197 log.go:172] (0xc000260960) (3) Data frame handling\nI0316 21:12:57.251788 197 log.go:172] (0xc0009cc000) Data frame received for 5\nI0316 21:12:57.251818 197 log.go:172] (0xc0007a0000) (5) Data frame handling\nI0316 21:12:57.251845 197 log.go:172] (0xc0007a0000) (5) Data frame sent\nI0316 21:12:57.251859 197 log.go:172] (0xc0009cc000) Data frame received for 5\nI0316 21:12:57.251867 197 log.go:172] (0xc0007a0000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 21:12:57.253279 197 log.go:172] (0xc0009cc000) Data frame received for 1\nI0316 21:12:57.253302 197 log.go:172] (0xc0006df400) (1) Data frame handling\nI0316 21:12:57.253317 197 log.go:172] (0xc0006df400) (1) Data frame sent\nI0316 21:12:57.253333 197 log.go:172] (0xc0009cc000) (0xc0006df400) Stream removed, broadcasting: 1\nI0316 21:12:57.253422 197 log.go:172] (0xc0009cc000) Go away received\nI0316 21:12:57.253702 197 log.go:172] (0xc0009cc000) (0xc0006df400) Stream removed, broadcasting: 1\nI0316 21:12:57.253727 197 log.go:172] (0xc0009cc000) (0xc000260960) Stream removed, broadcasting: 3\nI0316 21:12:57.253745 197 log.go:172] (0xc0009cc000) (0xc0007a0000) Stream removed, broadcasting: 5\n" Mar 16 21:12:57.257: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 21:12:57.257: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 21:12:57.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9580 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 21:12:57.472: INFO: stderr: "I0316 21:12:57.381029 222 log.go:172] (0xc000b8c420) (0xc000b26140) Create stream\nI0316 21:12:57.381073 222 log.go:172] (0xc000b8c420) (0xc000b26140) Stream added, broadcasting: 1\nI0316 21:12:57.382972 222 log.go:172] (0xc000b8c420) Reply frame received for 1\nI0316 21:12:57.383005 222 log.go:172] (0xc000b8c420) (0xc000b261e0) Create stream\nI0316 21:12:57.383016 222 log.go:172] (0xc000b8c420) (0xc000b261e0) Stream added, broadcasting: 3\nI0316 21:12:57.383906 222 log.go:172] (0xc000b8c420) Reply frame received for 3\nI0316 21:12:57.383942 222 log.go:172] (0xc000b8c420) (0xc000b26280) Create stream\nI0316 21:12:57.383954 222 log.go:172] (0xc000b8c420) (0xc000b26280) Stream added, broadcasting: 5\nI0316 21:12:57.385456 222 log.go:172] (0xc000b8c420) Reply frame received for 5\nI0316 21:12:57.443318 222 log.go:172] (0xc000b8c420) Data frame received for 5\nI0316 21:12:57.443345 222 log.go:172] (0xc000b26280) (5) Data frame handling\nI0316 21:12:57.443364 222 log.go:172] (0xc000b26280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 21:12:57.467235 222 log.go:172] (0xc000b8c420) Data frame received for 5\nI0316 21:12:57.467255 222 log.go:172] (0xc000b26280) (5) Data frame handling\nI0316 21:12:57.467270 222 log.go:172] (0xc000b8c420) Data frame received for 3\nI0316 21:12:57.467275 222 log.go:172] (0xc000b261e0) (3) Data frame handling\nI0316 21:12:57.467282 222 log.go:172] (0xc000b261e0) (3) Data frame sent\nI0316 21:12:57.467287 222 log.go:172] (0xc000b8c420) Data frame received for 3\nI0316 21:12:57.467291 222 log.go:172] (0xc000b261e0) (3) Data frame handling\nI0316 21:12:57.469296 222 log.go:172] (0xc000b8c420) Data frame received for 1\nI0316 21:12:57.469330 222 log.go:172] 
(0xc000b26140) (1) Data frame handling\nI0316 21:12:57.469367 222 log.go:172] (0xc000b26140) (1) Data frame sent\nI0316 21:12:57.469398 222 log.go:172] (0xc000b8c420) (0xc000b26140) Stream removed, broadcasting: 1\nI0316 21:12:57.469418 222 log.go:172] (0xc000b8c420) Go away received\nI0316 21:12:57.469684 222 log.go:172] (0xc000b8c420) (0xc000b26140) Stream removed, broadcasting: 1\nI0316 21:12:57.469698 222 log.go:172] (0xc000b8c420) (0xc000b261e0) Stream removed, broadcasting: 3\nI0316 21:12:57.469705 222 log.go:172] (0xc000b8c420) (0xc000b26280) Stream removed, broadcasting: 5\n" Mar 16 21:12:57.472: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 21:12:57.472: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 21:12:57.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9580 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 21:12:57.703: INFO: stderr: "I0316 21:12:57.602988 242 log.go:172] (0xc00010adc0) (0xc000619ae0) Create stream\nI0316 21:12:57.603049 242 log.go:172] (0xc00010adc0) (0xc000619ae0) Stream added, broadcasting: 1\nI0316 21:12:57.609669 242 log.go:172] (0xc00010adc0) Reply frame received for 1\nI0316 21:12:57.609727 242 log.go:172] (0xc00010adc0) (0xc0009f8000) Create stream\nI0316 21:12:57.609741 242 log.go:172] (0xc00010adc0) (0xc0009f8000) Stream added, broadcasting: 3\nI0316 21:12:57.610597 242 log.go:172] (0xc00010adc0) Reply frame received for 3\nI0316 21:12:57.610661 242 log.go:172] (0xc00010adc0) (0xc00024c000) Create stream\nI0316 21:12:57.610685 242 log.go:172] (0xc00010adc0) (0xc00024c000) Stream added, broadcasting: 5\nI0316 21:12:57.611544 242 log.go:172] (0xc00010adc0) Reply frame received for 5\nI0316 21:12:57.668496 242 log.go:172] (0xc00010adc0) Data frame received for 5\nI0316 21:12:57.668541 242 log.go:172] (0xc00024c000) (5) Data frame handling\nI0316 21:12:57.668571 242 log.go:172] (0xc00024c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 21:12:57.695574 242 log.go:172] (0xc00010adc0) Data frame received for 5\nI0316 21:12:57.695624 242 log.go:172] (0xc00024c000) (5) Data frame handling\nI0316 21:12:57.695669 242 log.go:172] (0xc00010adc0) Data frame received for 3\nI0316 21:12:57.695722 242 log.go:172] (0xc0009f8000) (3) Data frame handling\nI0316 21:12:57.695816 242 log.go:172] (0xc0009f8000) (3) Data frame sent\nI0316 21:12:57.695850 242 log.go:172] (0xc00010adc0) Data frame received for 3\nI0316 21:12:57.695865 242 log.go:172] (0xc0009f8000) (3) Data frame handling\nI0316 21:12:57.698081 242 log.go:172] (0xc00010adc0) Data frame received for 1\nI0316 21:12:57.698107 242 log.go:172] (0xc000619ae0) (1) Data frame handling\nI0316 21:12:57.698130 242 log.go:172] (0xc000619ae0) (1) Data frame sent\nI0316 21:12:57.698145 242 log.go:172] (0xc00010adc0) (0xc000619ae0) Stream removed, broadcasting: 1\nI0316 21:12:57.698299 242 log.go:172] (0xc00010adc0) Go away received\nI0316 21:12:57.698543 242 log.go:172] (0xc00010adc0) (0xc000619ae0) Stream removed, broadcasting: 1\nI0316 21:12:57.698563 242 log.go:172] (0xc00010adc0) (0xc0009f8000) Stream removed, broadcasting: 3\nI0316 21:12:57.698575 242 log.go:172] (0xc00010adc0) (0xc00024c000) Stream removed, broadcasting: 5\n" Mar 16 21:12:57.703: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 21:12:57.703: INFO: 
stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 21:12:57.703: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 21:12:57.705: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 16 21:13:07.714: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 16 21:13:07.714: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 16 21:13:07.714: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 16 21:13:07.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999481s Mar 16 21:13:08.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990229028s Mar 16 21:13:09.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985308018s Mar 16 21:13:10.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980617703s Mar 16 21:13:11.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975359016s Mar 16 21:13:12.756: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970020457s Mar 16 21:13:13.761: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.96471915s Mar 16 21:13:14.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.959365171s Mar 16 21:13:15.771: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954220069s Mar 16 21:13:16.776: INFO: Verifying statefulset ss doesn't scale past 3 for another 949.172903ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9580 Mar 16 21:13:17.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9580 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 21:13:18.015: INFO: stderr: "I0316 21:13:17.920805 265 log.go:172] (0xc000ae20b0) (0xc0004e5400) Create stream\nI0316 21:13:17.920890 265 log.go:172] (0xc000ae20b0) (0xc0004e5400) Stream added, broadcasting: 1\nI0316 21:13:17.923551 265 log.go:172] (0xc000ae20b0) Reply frame received for 1\nI0316 21:13:17.923602 265 log.go:172] (0xc000ae20b0) (0xc000a36000) Create stream\nI0316 21:13:17.923618 265 log.go:172] (0xc000ae20b0) (0xc000a36000) Stream added, broadcasting: 3\nI0316 21:13:17.924874 265 log.go:172] (0xc000ae20b0) Reply frame received for 3\nI0316 21:13:17.924929 265 log.go:172] (0xc000ae20b0) (0xc0007359a0) Create stream\nI0316 21:13:17.924959 265 log.go:172] (0xc000ae20b0) (0xc0007359a0) Stream added, broadcasting: 5\nI0316 21:13:17.926294 265 log.go:172] (0xc000ae20b0) Reply frame received for 5\nI0316 21:13:18.009701 265 log.go:172] (0xc000ae20b0) Data frame received for 5\nI0316 21:13:18.009738 265 log.go:172] (0xc0007359a0) (5) Data frame handling\nI0316 21:13:18.009755 265 log.go:172] (0xc0007359a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 21:13:18.009771 265 log.go:172] (0xc000ae20b0) Data frame received for 3\nI0316 21:13:18.009777 265 log.go:172] (0xc000a36000) (3) Data frame handling\nI0316 21:13:18.009783 265 log.go:172] (0xc000a36000) (3) Data frame sent\nI0316 21:13:18.009789 265 log.go:172] (0xc000ae20b0) Data frame received for 3\nI0316 21:13:18.009794 265 log.go:172] (0xc000a36000) (3) Data frame handling\nI0316 21:13:18.010110 265 log.go:172] (0xc000ae20b0) Data frame received for 5\nI0316 21:13:18.010146
265 log.go:172] (0xc0007359a0) (5) Data frame handling\nI0316 21:13:18.011490 265 log.go:172] (0xc000ae20b0) Data frame received for 1\nI0316 21:13:18.011513 265 log.go:172] (0xc0004e5400) (1) Data frame handling\nI0316 21:13:18.011527 265 log.go:172] (0xc0004e5400) (1) Data frame sent\nI0316 21:13:18.011546 265 log.go:172] (0xc000ae20b0) (0xc0004e5400) Stream removed, broadcasting: 1\nI0316 21:13:18.011570 265 log.go:172] (0xc000ae20b0) Go away received\nI0316 21:13:18.011851 265 log.go:172] (0xc000ae20b0) (0xc0004e5400) Stream removed, broadcasting: 1\nI0316 21:13:18.011865 265 log.go:172] (0xc000ae20b0) (0xc000a36000) Stream removed, broadcasting: 3\nI0316 21:13:18.011873 265 log.go:172] (0xc000ae20b0) (0xc0007359a0) Stream removed, broadcasting: 5\n" Mar 16 21:13:18.016: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 21:13:18.016: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 21:13:18.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9580 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 21:13:18.224: INFO: stderr: "I0316 21:13:18.152843 287 log.go:172] (0xc0000f56b0) (0xc0007b01e0) Create stream\nI0316 21:13:18.152905 287 log.go:172] (0xc0000f56b0) (0xc0007b01e0) Stream added, broadcasting: 1\nI0316 21:13:18.155445 287 log.go:172] (0xc0000f56b0) Reply frame received for 1\nI0316 21:13:18.155483 287 log.go:172] (0xc0000f56b0) (0xc000665b80) Create stream\nI0316 21:13:18.155494 287 log.go:172] (0xc0000f56b0) (0xc000665b80) Stream added, broadcasting: 3\nI0316 21:13:18.156136 287 log.go:172] (0xc0000f56b0) Reply frame received for 3\nI0316 21:13:18.156160 287 log.go:172] (0xc0000f56b0) (0xc000713540) Create stream\nI0316 21:13:18.156169 287 log.go:172] (0xc0000f56b0) (0xc000713540) Stream added, broadcasting: 5\nI0316 21:13:18.156869 287 log.go:172] (0xc0000f56b0) Reply frame received for 5\nI0316 21:13:18.219205 287 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0316 21:13:18.219234 287 log.go:172] (0xc000713540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 21:13:18.219262 287 log.go:172] (0xc0000f56b0) Data frame received for 3\nI0316 21:13:18.219301 287 log.go:172] (0xc000665b80) (3) Data frame handling\nI0316 21:13:18.219346 287 log.go:172] (0xc000665b80) (3) Data frame sent\nI0316 21:13:18.219365 287 log.go:172] (0xc0000f56b0) Data frame received for 3\nI0316 21:13:18.219402 287 log.go:172] (0xc000665b80) (3) Data frame handling\nI0316 21:13:18.219455 287 log.go:172] (0xc000713540) (5) Data frame sent\nI0316 21:13:18.219604 287 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0316 21:13:18.219630 287 log.go:172] (0xc000713540) (5) Data frame handling\nI0316 21:13:18.221547 287 log.go:172] (0xc0000f56b0) Data frame received for 1\nI0316 21:13:18.221570 287 log.go:172] (0xc0007b01e0) (1) Data frame handling\nI0316 21:13:18.221589 287 log.go:172] (0xc0007b01e0) (1) Data frame sent\nI0316 21:13:18.221603 287 log.go:172] (0xc0000f56b0) (0xc0007b01e0) Stream removed, broadcasting: 1\nI0316 21:13:18.221701 287 log.go:172] (0xc0000f56b0) Go away received\nI0316 21:13:18.222017 287 log.go:172] (0xc0000f56b0) (0xc0007b01e0) Stream removed, broadcasting: 1\nI0316 21:13:18.222039 287 log.go:172] (0xc0000f56b0) (0xc000665b80) Stream removed, broadcasting: 3\nI0316 21:13:18.222063 287 log.go:172] (0xc0000f56b0) 
(0xc000713540) Stream removed, broadcasting: 5\n" Mar 16 21:13:18.225: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 21:13:18.225: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 21:13:18.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9580 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 21:13:18.439: INFO: stderr: "I0316 21:13:18.362410 307 log.go:172] (0xc000104e70) (0xc000998000) Create stream\nI0316 21:13:18.362472 307 log.go:172] (0xc000104e70) (0xc000998000) Stream added, broadcasting: 1\nI0316 21:13:18.365470 307 log.go:172] (0xc000104e70) Reply frame received for 1\nI0316 21:13:18.365516 307 log.go:172] (0xc000104e70) (0xc00070fae0) Create stream\nI0316 21:13:18.365532 307 log.go:172] (0xc000104e70) (0xc00070fae0) Stream added, broadcasting: 3\nI0316 21:13:18.366478 307 log.go:172] (0xc000104e70) Reply frame received for 3\nI0316 21:13:18.366516 307 log.go:172] (0xc000104e70) (0xc0009980a0) Create stream\nI0316 21:13:18.366528 307 log.go:172] (0xc000104e70) (0xc0009980a0) Stream added, broadcasting: 5\nI0316 21:13:18.367466 307 log.go:172] (0xc000104e70) Reply frame received for 5\nI0316 21:13:18.432955 307 log.go:172] (0xc000104e70) Data frame received for 5\nI0316 21:13:18.432998 307 log.go:172] (0xc0009980a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 21:13:18.433043 307 log.go:172] (0xc000104e70) Data frame received for 3\nI0316 21:13:18.433087 307 log.go:172] (0xc00070fae0) (3) Data frame handling\nI0316 21:13:18.433263 307 log.go:172] (0xc00070fae0) (3) Data frame sent\nI0316 21:13:18.433294 307 log.go:172] (0xc0009980a0) (5) Data frame sent\nI0316 21:13:18.433326 307 log.go:172] (0xc000104e70) Data frame received for 5\nI0316 21:13:18.433345 307 log.go:172] (0xc0009980a0) (5) Data frame handling\nI0316 21:13:18.433561 307 log.go:172] (0xc000104e70) Data frame received for 3\nI0316 21:13:18.433669 307 log.go:172] (0xc00070fae0) (3) Data frame handling\nI0316 21:13:18.435189 307 log.go:172] (0xc000104e70) Data frame received for 1\nI0316 21:13:18.435226 307 log.go:172] (0xc000998000) (1) Data frame handling\nI0316 21:13:18.435247 307 log.go:172] (0xc000998000) (1) Data frame sent\nI0316 21:13:18.435271 307 log.go:172] (0xc000104e70) (0xc000998000) Stream removed, broadcasting: 1\nI0316 21:13:18.435531 307 log.go:172] (0xc000104e70) Go away received\nI0316 21:13:18.435686 307 log.go:172] (0xc000104e70) (0xc000998000) Stream removed, broadcasting: 1\nI0316 21:13:18.435720 307 log.go:172] (0xc000104e70) (0xc00070fae0) Stream removed, broadcasting: 3\nI0316 21:13:18.435739 307 log.go:172] (0xc000104e70) (0xc0009980a0) Stream removed, broadcasting: 5\n" Mar 16 21:13:18.439: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 21:13:18.439: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 21:13:18.439: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 16 21:13:28.454: INFO: Deleting all statefulset in ns statefulset-9580 Mar 16 21:13:28.458: INFO: Scaling 
statefulset ss to 0 Mar 16 21:13:28.466: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 21:13:28.469: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:13:28.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9580" for this suite. • [SLOW TEST:72.182 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":11,"skipped":237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:13:28.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-f64ba254-fc46-464b-a2a2-285c8c15cee7 STEP: Creating secret with name s-test-opt-upd-22e5caac-3e44-41f8-90d9-ebc068e73ae3 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f64ba254-fc46-464b-a2a2-285c8c15cee7 STEP: Updating secret s-test-opt-upd-22e5caac-3e44-41f8-90d9-ebc068e73ae3 STEP: Creating secret with name s-test-opt-create-0b0f88ec-a554-44b7-8bb4-a859e297de82 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:14:40.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8353" for this suite. 
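A note on the StatefulSet spec above (namespace statefulset-9580): the repeated kubectl exec 'mv -v .../index.html /tmp/' calls break each pod's HTTP readiness probe on purpose, and because the set uses the default OrderedReady pod management, the controller refuses to scale up or down while any pod is Running but not Ready, which is exactly what the "doesn't scale past N" polling verifies; moving index.html back makes the probe pass and scaling resume. A minimal sketch of an equivalent StatefulSet (the real manifest is generated by the e2e framework; the image and probe path are inferred from the apache htdocs paths in the log):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                  # the headless Service the framework creates first
  podManagementPolicy: OrderedReady  # default: pods are created/deleted one at a time, in order
  replicas: 1
  selector:
    matchLabels:
      baz: blah                      # matches the watcher selector baz=blah,foo=bar
      foo: bar
  template:
    metadata:
      labels:
        baz: blah
        foo: bar
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine   # inferred; any httpd image works
        readinessProbe:
          httpGet:
            path: /index.html        # mv'ing index.html to /tmp turns this into a 404 -> NotReady
            port: 80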
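For context on the Secrets spec above: the pod mounts the Secrets as optional volumes, which is what lets it start (and keep running) while a referenced Secret is missing; the kubelet then re-syncs the projected files after the delete, the update, and the late creation, which is what "waiting to observe update in volume" polls for. A rough sketch of the volume wiring, assuming illustrative pod/container names and mount paths (the Secret names are the ones from the log):

apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-pod          # illustrative name
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/secrets/*/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: del
      mountPath: /etc/secrets/del
      readOnly: true
    - name: create
      mountPath: /etc/secrets/create
      readOnly: true
  volumes:
  - name: del
    secret:
      secretName: s-test-opt-del-f64ba254-fc46-464b-a2a2-285c8c15cee7
      optional: true                 # pod keeps running after this Secret is deleted
  - name: create
    secret:
      secretName: s-test-opt-create-0b0f88ec-a554-44b7-8bb4-a859e297de82
      optional: true                 # mountable before the Secret exists; files appear once it does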
• [SLOW TEST:72.479 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":279,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:14:40.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 16 21:14:49.087: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 21:14:49.091: INFO: Pod pod-with-prestop-http-hook still exists Mar 16 21:14:51.091: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 21:14:51.095: INFO: Pod pod-with-prestop-http-hook still exists Mar 16 21:14:53.091: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 21:14:53.102: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:14:53.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3694" for this suite. 
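For context on the lifecycle-hook spec above: the BeforeEach starts a handler pod for HTTPGet hook requests, and pod-with-prestop-http-hook declares a preStop httpGet hook pointed at it, so when the pod is deleted the kubelet issues the HTTP call before stopping the container; the few seconds the pod lingers after the delete are the hook plus graceful termination, and "check prestop hook" then asks the handler whether the request arrived. A sketch of the relevant stanza, with placeholder host/port/path (not taken from this log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: busybox                    # illustrative; anything long-running works
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        httpGet:
          host: "<handler-pod-ip>"    # placeholder: IP of the handler pod from BeforeEach
          port: 8080                  # placeholder: port the handler listens on
          path: /prestop              # placeholder: the handler records that it was hit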
• [SLOW TEST:12.152 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:14:53.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 16 21:14:53.179: INFO: Waiting up to 5m0s for pod "downward-api-04e337d3-1625-4d48-818e-704c9a152b8c" in namespace "downward-api-1724" to be "success or failure" Mar 16 21:14:53.196: INFO: Pod "downward-api-04e337d3-1625-4d48-818e-704c9a152b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.827713ms Mar 16 21:14:55.199: INFO: Pod "downward-api-04e337d3-1625-4d48-818e-704c9a152b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020574675s Mar 16 21:14:57.203: INFO: Pod "downward-api-04e337d3-1625-4d48-818e-704c9a152b8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024403585s STEP: Saw pod success Mar 16 21:14:57.203: INFO: Pod "downward-api-04e337d3-1625-4d48-818e-704c9a152b8c" satisfied condition "success or failure" Mar 16 21:14:57.211: INFO: Trying to get logs from node jerma-worker pod downward-api-04e337d3-1625-4d48-818e-704c9a152b8c container dapi-container: STEP: delete the pod Mar 16 21:14:57.251: INFO: Waiting for pod downward-api-04e337d3-1625-4d48-818e-704c9a152b8c to disappear Mar 16 21:14:57.276: INFO: Pod downward-api-04e337d3-1625-4d48-818e-704c9a152b8c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:14:57.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1724" for this suite. 
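For context on the Downward API spec above: the node's IP is injected into dapi-container through an env var backed by the status.hostIP field; the container prints it and exits, which is why the pod moves Pending -> Succeeded and the test then fetches the container log to verify the value. A minimal equivalent, assuming an illustrative pod name and print command:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative; the test generates a UID-suffixed name
spec:
  restartPolicy: Never           # run once; the framework waits for "success or failure"
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved by the kubelet when the pod starts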
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":348,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:14:57.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 21:14:57.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41ea6b81-8cb9-4a0d-9ef7-6b4303fa3e66" in namespace "projected-4304" to be "success or failure" Mar 16 21:14:57.350: INFO: Pod "downwardapi-volume-41ea6b81-8cb9-4a0d-9ef7-6b4303fa3e66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.648955ms Mar 16 21:14:59.354: INFO: Pod "downwardapi-volume-41ea6b81-8cb9-4a0d-9ef7-6b4303fa3e66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008048533s Mar 16 21:15:01.358: INFO: Pod "downwardapi-volume-41ea6b81-8cb9-4a0d-9ef7-6b4303fa3e66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012691944s STEP: Saw pod success Mar 16 21:15:01.358: INFO: Pod "downwardapi-volume-41ea6b81-8cb9-4a0d-9ef7-6b4303fa3e66" satisfied condition "success or failure" Mar 16 21:15:01.361: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-41ea6b81-8cb9-4a0d-9ef7-6b4303fa3e66 container client-container: STEP: delete the pod Mar 16 21:15:01.383: INFO: Waiting for pod downwardapi-volume-41ea6b81-8cb9-4a0d-9ef7-6b4303fa3e66 to disappear Mar 16 21:15:01.387: INFO: Pod downwardapi-volume-41ea6b81-8cb9-4a0d-9ef7-6b4303fa3e66 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:15:01.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4304" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":367,"failed":0} SSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:15:01.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:15:01.794: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-ac520835-274f-4b07-85f1-5fd665eca7d0" in namespace "security-context-test-9825" to be "success or failure" Mar 16 21:15:01.804: INFO: Pod "alpine-nnp-false-ac520835-274f-4b07-85f1-5fd665eca7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.722711ms Mar 16 21:15:03.863: INFO: Pod "alpine-nnp-false-ac520835-274f-4b07-85f1-5fd665eca7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069695187s Mar 16 21:15:05.868: INFO: Pod "alpine-nnp-false-ac520835-274f-4b07-85f1-5fd665eca7d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074228062s Mar 16 21:15:05.868: INFO: Pod "alpine-nnp-false-ac520835-274f-4b07-85f1-5fd665eca7d0" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:15:05.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9825" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:15:05.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:15:05.942: INFO: Creating deployment "webserver-deployment" Mar 16 21:15:05.945: INFO: Waiting for observed generation 1 Mar 16 21:15:07.958: INFO: Waiting for all required pods to come up Mar 16 21:15:07.961: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 16 21:15:15.989: INFO: Waiting for deployment "webserver-deployment" to complete Mar 16 21:15:15.994: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 16 21:15:16.000: INFO: Updating deployment webserver-deployment Mar 16 21:15:16.000: INFO: Waiting for observed generation 2 Mar 16 21:15:18.025: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 16 21:15:18.028: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 16 21:15:18.052: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 16 21:15:18.088: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 16 21:15:18.089: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 16 21:15:18.091: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 16 21:15:18.094: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 16 21:15:18.094: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 16 21:15:18.099: INFO: Updating deployment webserver-deployment Mar 16 21:15:18.099: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 16 21:15:18.236: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 16 21:15:18.262: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 16 21:15:18.459: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6543 /apis/apps/v1/namespaces/deployment-6543/deployments/webserver-deployment 3dd05cc9-f5d2-4b69-b5d3-ae52f42fdede 313821 3 2020-03-16 21:15:05 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00196fc18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-16 21:15:16 +0000 UTC,LastTransitionTime:2020-03-16 21:15:05 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-16 21:15:18 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 16 21:15:18.543: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6543 /apis/apps/v1/namespaces/deployment-6543/replicasets/webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 313865 3 2020-03-16 21:15:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 3dd05cc9-f5d2-4b69-b5d3-ae52f42fdede 0xc0026360e7 0xc0026360e8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002636158 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 21:15:18.543: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 16 21:15:18.543: INFO: 
&ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6543 /apis/apps/v1/namespaces/deployment-6543/replicasets/webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 313855 3 2020-03-16 21:15:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 3dd05cc9-f5d2-4b69-b5d3-ae52f42fdede 0xc002636027 0xc002636028}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002636088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 16 21:15:18.673: INFO: Pod "webserver-deployment-595b5b9587-4tl8c" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4tl8c webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-4tl8c 67be8c2b-c060-4359-b201-3567ff4f373e 313854 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc002f4f577 0xc002f4f578}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.674: INFO: Pod "webserver-deployment-595b5b9587-4wpxz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4wpxz webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-4wpxz 8fb28a7e-c1ff-4d33-a159-b0698559dea2 313837 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc002f4f690 0xc002f4f691}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.674: INFO: Pod "webserver-deployment-595b5b9587-7zghq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7zghq webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-7zghq fb25c0bf-88cc-4341-9829-b8dd3fa552c4 313723 0 2020-03-16 
21:15:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc002f4f7b0 0xc002f4f7b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.231,StartTime:2020-03-16 21:15:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:15:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4feced33883460badd8c03d19c55b4966d8d73bdce5aae47039b15cf5ef7286b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.674: INFO: Pod "webserver-deployment-595b5b9587-9plnp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9plnp webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-9plnp b573ac07-78fe-4130-9ccf-e660524fb10b 313852 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc002f4f927 0xc002f4f928}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists
,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.674: INFO: Pod "webserver-deployment-595b5b9587-9r6st" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9r6st webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-9r6st 41a849d4-a532-4b1c-9087-f6b884da9eae 313715 0 2020-03-16 21:15:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc002f4fa60 0xc002f4fa61}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken
:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.252,StartTime:2020-03-16 21:15:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:15:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5618a8d020480b96bc52615eac06f17646aef2c44d4ea42d2874c6e5fa2625b7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.252,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.675: INFO: Pod "webserver-deployment-595b5b9587-bbfnk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bbfnk webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-bbfnk 32c0a5e6-7cf2-43ca-9b64-b1190bdc8d07 313659 0 2020-03-16 21:15:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc002f4fc10 0xc002f4fc11}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.227,StartTime:2020-03-16 21:15:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:15:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4c87e75b39d0fa13151db89a35bd85acb48acadca0db5695baf0c41606b5cbd0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.675: INFO: Pod "webserver-deployment-595b5b9587-brt42" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-brt42 webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-brt42 e6cd678e-f81f-49cb-ac63-08096579b038 313687 0 2020-03-16 21:15:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc002f4fd97 0xc002f4fd98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Ef
fect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.228,StartTime:2020-03-16 21:15:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:15:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://780fe5a60850578eff66df165680d04e80f1e8efe045da4a16fae0108530f7e2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.675: INFO: Pod "webserver-deployment-595b5b9587-db7sl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-db7sl webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-db7sl 54d7273d-a6c3-43fa-bb3b-b60c58a80ab7 313842 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc002f4ff17 0xc002f4ff18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.675: INFO: Pod "webserver-deployment-595b5b9587-dqs9z" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dqs9z webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-dqs9z 0e7d6979-ed23-4aa2-804e-05633887ff19 313859 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281a030 0xc00281a031}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-16 21:15:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.676: INFO: Pod "webserver-deployment-595b5b9587-fbznv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fbznv webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-fbznv 4f94abb5-65a8-4934-bf20-298f861f2d84 313695 0 2020-03-16 21:15:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281a4b0 0xc00281a4b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:
*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.251,StartTime:2020-03-16 21:15:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:15:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b1528f592c1bd8f80f9e19db6af6af4917d212c766cb7d8a15e95251c2fdf0a5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.251,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.676: INFO: Pod "webserver-deployment-595b5b9587-fjpqv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fjpqv webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-fjpqv b48f4013-2367-461b-8699-bdbe36e32c2d 313822 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281a6c0 0xc00281a6c1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.676: INFO: Pod "webserver-deployment-595b5b9587-gs8bc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gs8bc webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-gs8bc 8535be5d-4824-4f86-8d8a-a9cd5ce59167 313853 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281a7d0 0xc00281a7d1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.676: INFO: Pod "webserver-deployment-595b5b9587-jr2kc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jr2kc webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-jr2kc 4e4fdf14-31ad-4048-9993-55dcb98c4f93 313851 0 
2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281a8e0 0xc00281a8e1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.676: INFO: Pod "webserver-deployment-595b5b9587-jxqjn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jxqjn webserver-deployment-595b5b9587- deployment-6543 
/api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-jxqjn fd611b90-ea17-422e-a32b-b251c4a6ee40 313823 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281aac0 0xc00281aac1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.676: INFO: Pod "webserver-deployment-595b5b9587-kvqqm" is not available: 
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kvqqm webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-kvqqm 835e9cfb-942e-4544-94c5-c8475971d1ac 313836 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281ad80 0xc00281ad81}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.676: INFO: Pod "webserver-deployment-595b5b9587-mcm4f" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mcm4f webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-mcm4f 873309fd-78d4-4849-815f-7666c62952c7 313727 0 2020-03-16 21:15:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281b010 0xc00281b011}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,
LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.230,StartTime:2020-03-16 21:15:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:15:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6878150a7d6f379f3a63a66835771ccac257772cabfcbe1cef00a89f821e63eb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.677: INFO: Pod "webserver-deployment-595b5b9587-mprd5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mprd5 webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-mprd5 8b9d3f85-6172-4042-a937-7183f97b6f01 313841 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281b397 0xc00281b398}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.677: INFO: Pod "webserver-deployment-595b5b9587-mtbw6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mtbw6 webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-mtbw6 9b2eeec4-bc9c-4e10-abd5-9c457934c460 313850 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281b600 0xc00281b601}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.677: INFO: Pod "webserver-deployment-595b5b9587-nsfcs" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nsfcs webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-nsfcs efdedb9b-97a3-4251-80df-85d46347a9d1 313700 0 2020-03-16 
21:15:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281b840 0xc00281b841}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.229,StartTime:2020-03-16 21:15:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:15:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://65ef2a101e7f845981a29d669b82478cc6766529bd2b637545cbd9d97f7c55e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.678: INFO: Pod "webserver-deployment-595b5b9587-qv575" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qv575 webserver-deployment-595b5b9587- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-595b5b9587-qv575 418f3e4e-45f8-4b86-836b-ae95f2497f4c 313679 0 2020-03-16 21:15:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fa7d8539-7d18-433d-bbc0-97252aa66c3f 0xc00281bc37 0xc00281bc38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,
Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.250,StartTime:2020-03-16 21:15:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:15:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ef6687bdde7ffb5672394863ef660b43fe661d85cbe38de16300265cd9adccd3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.250,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.678: INFO: Pod "webserver-deployment-c7997dcc8-45wfv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-45wfv webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-45wfv 2777c917-9f70-4704-9acf-74a680374018 313864 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 0xc00281bef0 0xc00281bef1}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-16 21:15:18 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.678: INFO: Pod "webserver-deployment-c7997dcc8-52tc4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-52tc4 webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-52tc4 15c78deb-2381-4fde-a182-da02979c289a 313771 0 2020-03-16 21:15:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e02b0 0xc0028e02b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,
Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-16 21:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.678: INFO: Pod "webserver-deployment-c7997dcc8-5nt5j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5nt5j webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-5nt5j b95136ff-0456-4dc0-b139-9345a2b52c2d 313849 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.678: INFO: Pod "webserver-deployment-c7997dcc8-cqft2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cqft2 webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-cqft2 fadd9908-d9b5-41df-99d4-eb76b67d0192 313844 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e07e0 0xc0028e07e1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.679: INFO: Pod "webserver-deployment-c7997dcc8-klqdm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-klqdm webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-klqdm e9ac11bb-8bd7-4659-8821-6b47f0769ceb 313790 0 2020-03-16 21:15:16 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e0970 0xc0028e0971}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-16 21:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.679: INFO: Pod "webserver-deployment-c7997dcc8-lwctx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lwctx webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-lwctx 846c8ed5-8884-4691-9737-3fa9236884fb 313827 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e0ae0 0xc0028e0ae1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,
Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.679: INFO: Pod "webserver-deployment-c7997dcc8-n9wjx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n9wjx webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-n9wjx dfac1251-bef8-490a-a2d6-61253c86ad60 313848 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e0c00 0xc0028e0c01}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,
TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.679: INFO: Pod "webserver-deployment-c7997dcc8-nxjzm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nxjzm webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-nxjzm 38199482-c379-4bc3-9419-065fd9861565 313791 0 2020-03-16 21:15:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e0d20 0xc0028e0d21}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},
Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-16 21:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.679: INFO: Pod "webserver-deployment-c7997dcc8-p55fs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p55fs webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-p55fs 9f847eee-26bc-445f-8e30-841348e26b2c 313857 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e0e90 0xc0028e0e91}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.679: INFO: Pod "webserver-deployment-c7997dcc8-qsrlt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qsrlt webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-qsrlt 8a96a904-4408-44bc-ae04-e0b7486d4e38 313840 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e0fb0 0xc0028e0fb1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.679: INFO: Pod "webserver-deployment-c7997dcc8-s69hr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s69hr webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-s69hr 6fd1a19a-15f0-4ba1-af36-042a481ab23a 313776 0 2020-03-16 21:15:16 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e1280 0xc0028e1281}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-16 21:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.679: INFO: Pod "webserver-deployment-c7997dcc8-tfxtn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tfxtn webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-tfxtn 30762846-5d7c-4838-ae58-d9161e01ccca 313763 0 2020-03-16 21:15:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e1560 0xc0028e1561}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*
0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-16 21:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 21:15:18.680: INFO: Pod "webserver-deployment-c7997dcc8-wsv9c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wsv9c webserver-deployment-c7997dcc8- deployment-6543 /api/v1/namespaces/deployment-6543/pods/webserver-deployment-c7997dcc8-wsv9c 231c8423-f540-448f-8d69-69e81981d414 313847 0 2020-03-16 21:15:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d071bf0d-b406-4b11-ab32-2630aac238c5 0xc0028e17f0 0xc0028e17f1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rq7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rq7rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rq7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:15:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:15:18.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6543" for this suite. 
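For readers who want to reproduce the proportional-scaling behavior exercised here without the e2e framework, a minimal kubectl session might look like the sketch below. The deployment name, image tags, and replica counts are illustrative; webserver:404 stands in for any unpullable image, which is how the test keeps the rollout from completing:

kubectl create deployment webserver --image=httpd:2.4.38-alpine
kubectl scale deployment/webserver --replicas=10
kubectl rollout status deployment/webserver
# Start a rollout that cannot finish: the new image does not exist, so the
# old and new ReplicaSets both stay partially scaled.
kubectl set image deployment/webserver httpd=webserver:404
# Scale while the rollout is stuck. The deployment controller distributes
# the extra replicas across both ReplicaSets in proportion to their current
# sizes, within the maxSurge/maxUnavailable budget.
kubectl scale deployment/webserver --replicas=30
kubectl get rs -l app=webserver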
• [SLOW TEST:12.929 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":17,"skipped":419,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:15:18.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:15:19.001: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 16 21:15:21.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2733 create -f -' Mar 16 21:15:33.861: INFO: stderr: "" Mar 16 21:15:33.861: INFO: stdout: "e2e-test-crd-publish-openapi-6066-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 16 21:15:33.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2733 delete e2e-test-crd-publish-openapi-6066-crds test-cr' Mar 16 21:15:33.954: INFO: stderr: "" Mar 16 21:15:33.954: INFO: stdout: "e2e-test-crd-publish-openapi-6066-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 16 21:15:33.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2733 apply -f -' Mar 16 21:15:34.231: INFO: stderr: "" Mar 16 21:15:34.231: INFO: stdout: "e2e-test-crd-publish-openapi-6066-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 16 21:15:34.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2733 delete e2e-test-crd-publish-openapi-6066-crds test-cr' Mar 16 21:15:34.343: INFO: stderr: "" Mar 16 21:15:34.343: INFO: stdout: "e2e-test-crd-publish-openapi-6066-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 16 21:15:34.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6066-crds' Mar 16 21:15:34.611: INFO: stderr: "" Mar 16 21:15:34.611: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6066-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:15:37.729: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2733" for this suite. • [SLOW TEST:18.915 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":18,"skipped":421,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:15:37.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-c526e2a3-4c10-403e-b1a1-5e1b807dead7 STEP: Creating a pod to test consume secrets Mar 16 21:15:37.951: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1350509c-1500-4075-a39b-fac21638d633" in namespace "projected-4010" to be "success or failure" Mar 16 21:15:37.956: INFO: Pod "pod-projected-secrets-1350509c-1500-4075-a39b-fac21638d633": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345623ms Mar 16 21:15:39.959: INFO: Pod "pod-projected-secrets-1350509c-1500-4075-a39b-fac21638d633": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0077158s Mar 16 21:15:41.963: INFO: Pod "pod-projected-secrets-1350509c-1500-4075-a39b-fac21638d633": Phase="Running", Reason="", readiness=true. Elapsed: 4.011303309s Mar 16 21:15:43.966: INFO: Pod "pod-projected-secrets-1350509c-1500-4075-a39b-fac21638d633": Phase="Running", Reason="", readiness=true. Elapsed: 6.014531005s Mar 16 21:15:45.970: INFO: Pod "pod-projected-secrets-1350509c-1500-4075-a39b-fac21638d633": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018772794s STEP: Saw pod success Mar 16 21:15:45.970: INFO: Pod "pod-projected-secrets-1350509c-1500-4075-a39b-fac21638d633" satisfied condition "success or failure" Mar 16 21:15:45.974: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-1350509c-1500-4075-a39b-fac21638d633 container secret-volume-test: STEP: delete the pod Mar 16 21:15:46.000: INFO: Waiting for pod pod-projected-secrets-1350509c-1500-4075-a39b-fac21638d633 to disappear Mar 16 21:15:46.005: INFO: Pod pod-projected-secrets-1350509c-1500-4075-a39b-fac21638d633 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:15:46.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4010" for this suite. 
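The projected-secret spec that just passed mounts a single secret through two separate projected volumes and reads it back from both paths. A hand-written equivalent would look roughly like this; the secret name, key, and mount paths are illustrative, not taken from the run:

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test        # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test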
• [SLOW TEST:8.279 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":424,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:15:46.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:15:46.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5402" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":20,"skipped":429,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:15:46.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 16 21:15:46.132: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 21:15:46.165: INFO: Waiting for terminating namespaces to be deleted... 
Mar 16 21:15:46.168: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 16 21:15:46.173: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 16 21:15:46.173: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 21:15:46.173: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 16 21:15:46.173: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 21:15:46.173: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 16 21:15:46.179: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 16 21:15:46.179: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 21:15:46.179: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 16 21:15:46.179: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7a9311f0-8b78-4d56-9ec7-4c6a410baf90 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-7a9311f0-8b78-4d56-9ec7-4c6a410baf90 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7a9311f0-8b78-4d56-9ec7-4c6a410baf90 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:15:54.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3859" for this suite.
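The label-then-relaunch sequence above can be reproduced by hand. The label key/value, node name, and pod name below are illustrative (the test generates a random kubernetes.io/e2e-* key; an ordinary custom key works the same way):

kubectl label node jerma-worker2 example.com/e2e-demo=42
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# The pod schedules only onto the labeled node. Remove the label afterwards,
# as the test does:
kubectl label node jerma-worker2 example.com/e2e-demo-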
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.788 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":21,"skipped":441,"failed":0} SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:15:54.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 16 21:15:54.918: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:16:02.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3279" for this suite. 
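The init-container spec above creates a pod shaped roughly like the manifest below (names and images are illustrative). With restartPolicy: Never, the kubelet runs each init container to completion, in order, before starting the app container; the test asserts exactly that invocation order:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]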
• [SLOW TEST:7.874 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":22,"skipped":447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:16:02.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 16 21:16:03.164: INFO: Waiting up to 5m0s for pod "pod-89a39e66-e51b-41e0-a3a1-6830a787804b" in namespace "emptydir-9770" to be "success or failure" Mar 16 21:16:03.216: INFO: Pod "pod-89a39e66-e51b-41e0-a3a1-6830a787804b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.674465ms Mar 16 21:16:05.259: INFO: Pod "pod-89a39e66-e51b-41e0-a3a1-6830a787804b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095091996s Mar 16 21:16:07.263: INFO: Pod "pod-89a39e66-e51b-41e0-a3a1-6830a787804b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09889425s STEP: Saw pod success Mar 16 21:16:07.263: INFO: Pod "pod-89a39e66-e51b-41e0-a3a1-6830a787804b" satisfied condition "success or failure" Mar 16 21:16:07.265: INFO: Trying to get logs from node jerma-worker2 pod pod-89a39e66-e51b-41e0-a3a1-6830a787804b container test-container: STEP: delete the pod Mar 16 21:16:07.283: INFO: Waiting for pod pod-89a39e66-e51b-41e0-a3a1-6830a787804b to disappear Mar 16 21:16:07.287: INFO: Pod pod-89a39e66-e51b-41e0-a3a1-6830a787804b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:16:07.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9770" for this suite. 
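The (non-root,0644,tmpfs) case boils down to a pod like the sketch below; the UID, file name, and mode are illustrative. The other EmptyDir variants in this run, (root,0666,default) and (root,0777,tmpfs), differ only in the security context, the requested mode, and whether medium: Memory is set:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # the "non-root" part
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a %u' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # the "tmpfs" part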
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":524,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:16:07.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 16 21:16:07.367: INFO: Waiting up to 5m0s for pod "pod-d6c2a8df-695d-4175-ac74-13f9eb4e0b1b" in namespace "emptydir-5106" to be "success or failure" Mar 16 21:16:07.393: INFO: Pod "pod-d6c2a8df-695d-4175-ac74-13f9eb4e0b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.73079ms Mar 16 21:16:09.408: INFO: Pod "pod-d6c2a8df-695d-4175-ac74-13f9eb4e0b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040882s Mar 16 21:16:11.434: INFO: Pod "pod-d6c2a8df-695d-4175-ac74-13f9eb4e0b1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067245972s STEP: Saw pod success Mar 16 21:16:11.434: INFO: Pod "pod-d6c2a8df-695d-4175-ac74-13f9eb4e0b1b" satisfied condition "success or failure" Mar 16 21:16:11.437: INFO: Trying to get logs from node jerma-worker pod pod-d6c2a8df-695d-4175-ac74-13f9eb4e0b1b container test-container: STEP: delete the pod Mar 16 21:16:11.472: INFO: Waiting for pod pod-d6c2a8df-695d-4175-ac74-13f9eb4e0b1b to disappear Mar 16 21:16:11.485: INFO: Pod pod-d6c2a8df-695d-4175-ac74-13f9eb4e0b1b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:16:11.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5106" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":528,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:16:11.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:16:11.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4853" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":25,"skipped":535,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:16:11.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-2f8bab24-0c92-42ef-87e6-ebb35bb8499f STEP: Creating a pod to test consume configMaps Mar 16 21:16:11.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-48a1aa12-483f-4249-9e38-c6fa47b53d70" in namespace "configmap-9673" to be "success or failure" Mar 16 21:16:11.788: INFO: Pod "pod-configmaps-48a1aa12-483f-4249-9e38-c6fa47b53d70": Phase="Pending", Reason="", readiness=false. Elapsed: 26.372185ms Mar 16 21:16:13.792: INFO: Pod "pod-configmaps-48a1aa12-483f-4249-9e38-c6fa47b53d70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030707871s Mar 16 21:16:15.796: INFO: Pod "pod-configmaps-48a1aa12-483f-4249-9e38-c6fa47b53d70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034918284s STEP: Saw pod success Mar 16 21:16:15.796: INFO: Pod "pod-configmaps-48a1aa12-483f-4249-9e38-c6fa47b53d70" satisfied condition "success or failure" Mar 16 21:16:15.799: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-48a1aa12-483f-4249-9e38-c6fa47b53d70 container configmap-volume-test: STEP: delete the pod Mar 16 21:16:15.956: INFO: Waiting for pod pod-configmaps-48a1aa12-483f-4249-9e38-c6fa47b53d70 to disappear Mar 16 21:16:15.987: INFO: Pod pod-configmaps-48a1aa12-483f-4249-9e38-c6fa47b53d70 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:16:15.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9673" for this suite. 
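The ConfigMap-volume spec above can be approximated with the pair of manifests below (names, key, and UID are illustrative). Running the reader as a non-root UID is the point of the test: the projected file must still be readable:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume       # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume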
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":537,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:16:15.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 21:16:16.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-223e4589-454d-4958-9e63-c12c04060c2c" in namespace "downward-api-5872" to be "success or failure" Mar 16 21:16:16.095: INFO: Pod "downwardapi-volume-223e4589-454d-4958-9e63-c12c04060c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.374109ms Mar 16 21:16:18.099: INFO: Pod "downwardapi-volume-223e4589-454d-4958-9e63-c12c04060c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0105011s Mar 16 21:16:20.104: INFO: Pod "downwardapi-volume-223e4589-454d-4958-9e63-c12c04060c2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015008661s STEP: Saw pod success Mar 16 21:16:20.104: INFO: Pod "downwardapi-volume-223e4589-454d-4958-9e63-c12c04060c2c" satisfied condition "success or failure" Mar 16 21:16:20.108: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-223e4589-454d-4958-9e63-c12c04060c2c container client-container: STEP: delete the pod Mar 16 21:16:20.138: INFO: Waiting for pod downwardapi-volume-223e4589-454d-4958-9e63-c12c04060c2c to disappear Mar 16 21:16:20.149: INFO: Pod downwardapi-volume-223e4589-454d-4958-9e63-c12c04060c2c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:16:20.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5872" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":539,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:16:20.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:17:20.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1447" for this suite. • [SLOW TEST:60.088 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":548,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:17:20.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:17:35.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-816" for this suite. 
STEP: Destroying namespace "nsdeletetest-3549" for this suite. Mar 16 21:17:35.563: INFO: Namespace nsdeletetest-3549 was already deleted STEP: Destroying namespace "nsdeletetest-8161" for this suite. • [SLOW TEST:15.322 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":29,"skipped":554,"failed":0} [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:17:35.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 16 21:17:35.626: INFO: Waiting up to 5m0s for pod "pod-1fd1285c-047f-4e02-a82a-f46f9eefd42d" in namespace "emptydir-9851" to be "success or failure" Mar 16 21:17:35.630: INFO: Pod "pod-1fd1285c-047f-4e02-a82a-f46f9eefd42d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.296253ms Mar 16 21:17:37.634: INFO: Pod "pod-1fd1285c-047f-4e02-a82a-f46f9eefd42d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008039246s Mar 16 21:17:39.638: INFO: Pod "pod-1fd1285c-047f-4e02-a82a-f46f9eefd42d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0115088s STEP: Saw pod success Mar 16 21:17:39.638: INFO: Pod "pod-1fd1285c-047f-4e02-a82a-f46f9eefd42d" satisfied condition "success or failure" Mar 16 21:17:39.640: INFO: Trying to get logs from node jerma-worker2 pod pod-1fd1285c-047f-4e02-a82a-f46f9eefd42d container test-container: STEP: delete the pod Mar 16 21:17:39.661: INFO: Waiting for pod pod-1fd1285c-047f-4e02-a82a-f46f9eefd42d to disappear Mar 16 21:17:39.665: INFO: Pod pod-1fd1285c-047f-4e02-a82a-f46f9eefd42d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:17:39.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9851" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":554,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:17:39.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:17:39.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-826" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":31,"skipped":600,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:17:39.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 21:17:43.963: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:17:44.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4063" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:17:44.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:17:44.186: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 16 21:17:49.194: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 16 21:17:49.194: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 16 21:17:51.198: INFO: Creating deployment "test-rollover-deployment" Mar 16 21:17:51.218: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 16 21:17:53.224: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 16 21:17:53.229: INFO: Ensure that both replica sets have 1 created replica Mar 16 21:17:53.235: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 16 21:17:53.240: INFO: Updating deployment test-rollover-deployment Mar 16 21:17:53.240: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 16 21:17:55.314: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 16 21:17:55.322: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 16 21:17:55.327: INFO: all replica sets need to contain the pod-template-hash label Mar 16 21:17:55.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990273, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 21:17:57.343: INFO: all replica sets need to contain the pod-template-hash label Mar 16 21:17:57.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, 
UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990275, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 21:17:59.335: INFO: all replica sets need to contain the pod-template-hash label Mar 16 21:17:59.335: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990275, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 21:18:01.335: INFO: all replica sets need to contain the pod-template-hash label Mar 16 21:18:01.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990275, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 21:18:03.336: INFO: all replica sets need to contain the pod-template-hash label Mar 16 21:18:03.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990275, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 21:18:05.336: INFO: all replica sets need to contain the pod-template-hash label Mar 16 21:18:05.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990275, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990271, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 21:18:07.338: INFO: Mar 16 21:18:07.338: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 16 21:18:07.345: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6220 /apis/apps/v1/namespaces/deployment-6220/deployments/test-rollover-deployment b8ab353d-34b4-4d25-a2a2-808992441f4a 315073 2 2020-03-16 21:17:51 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fb9c28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-16 21:17:51 +0000 UTC,LastTransitionTime:2020-03-16 21:17:51 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-16 21:18:05 +0000 UTC,LastTransitionTime:2020-03-16 21:17:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 16 21:18:07.348: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-6220 /apis/apps/v1/namespaces/deployment-6220/replicasets/test-rollover-deployment-574d6dfbff c100cc21-e2e8-4298-a144-543c4a932a5c 315062 2 2020-03-16 21:17:53 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment b8ab353d-34b4-4d25-a2a2-808992441f4a 0xc002c66637 0xc002c66638}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c666a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 16 21:18:07.348: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 16 21:18:07.349: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6220 /apis/apps/v1/namespaces/deployment-6220/replicasets/test-rollover-controller 68d0c8de-41dd-4743-99ba-33f895f49a0a 315071 2 2020-03-16 21:17:44 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment b8ab353d-34b4-4d25-a2a2-808992441f4a 0xc002c66567 0xc002c66568}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c665c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 21:18:07.349: 
INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6220 /apis/apps/v1/namespaces/deployment-6220/replicasets/test-rollover-deployment-f6c94f66c 4bad0193-479f-42ab-ab7c-3237be78ff1c 315019 2 2020-03-16 21:17:51 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment b8ab353d-34b4-4d25-a2a2-808992441f4a 0xc002c66710 0xc002c66711}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c66788 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 21:18:07.352: INFO: Pod "test-rollover-deployment-574d6dfbff-s75d8" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-s75d8 test-rollover-deployment-574d6dfbff- deployment-6220 /api/v1/namespaces/deployment-6220/pods/test-rollover-deployment-574d6dfbff-s75d8 a844800a-373e-4b35-abb1-975b15c64219 315030 0 2020-03-16 21:17:53 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff c100cc21-e2e8-4298-a144-543c4a932a5c 0xc003f3dd87 0xc003f3dd88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4spcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4spcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4spcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:17:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:17:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:17:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.24,StartTime:2020-03-16 21:17:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:17:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://75acae5924111a2efa1483207317ce2e17acde16d9427af3cbdf98b893a9645d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:18:07.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6220" for this suite. • [SLOW TEST:23.250 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":33,"skipped":637,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:18:07.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 16 21:18:07.448: INFO: Waiting up to 5m0s for pod "downward-api-6946e077-c62c-4167-8a28-6fdb2cb7caf6" in namespace "downward-api-7425" to be "success or failure" Mar 16 21:18:07.451: INFO: Pod "downward-api-6946e077-c62c-4167-8a28-6fdb2cb7caf6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.023338ms Mar 16 21:18:09.456: INFO: Pod "downward-api-6946e077-c62c-4167-8a28-6fdb2cb7caf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007620508s Mar 16 21:18:11.460: INFO: Pod "downward-api-6946e077-c62c-4167-8a28-6fdb2cb7caf6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011867274s STEP: Saw pod success Mar 16 21:18:11.460: INFO: Pod "downward-api-6946e077-c62c-4167-8a28-6fdb2cb7caf6" satisfied condition "success or failure" Mar 16 21:18:11.464: INFO: Trying to get logs from node jerma-worker pod downward-api-6946e077-c62c-4167-8a28-6fdb2cb7caf6 container dapi-container: STEP: delete the pod Mar 16 21:18:11.500: INFO: Waiting for pod downward-api-6946e077-c62c-4167-8a28-6fdb2cb7caf6 to disappear Mar 16 21:18:11.505: INFO: Pod downward-api-6946e077-c62c-4167-8a28-6fdb2cb7caf6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:18:11.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7425" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":638,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:18:11.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-b48x STEP: Creating a pod to test atomic-volume-subpath Mar 16 21:18:11.648: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-b48x" in namespace "subpath-9321" to be "success or failure" Mar 16 21:18:11.655: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Pending", Reason="", readiness=false. Elapsed: 7.532698ms Mar 16 21:18:13.659: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011414024s Mar 16 21:18:15.663: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Running", Reason="", readiness=true. Elapsed: 4.015315084s Mar 16 21:18:17.667: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Running", Reason="", readiness=true. Elapsed: 6.019416206s Mar 16 21:18:19.671: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Running", Reason="", readiness=true. Elapsed: 8.023279036s Mar 16 21:18:21.675: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Running", Reason="", readiness=true. Elapsed: 10.027189448s Mar 16 21:18:23.679: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Running", Reason="", readiness=true. Elapsed: 12.031513894s Mar 16 21:18:25.683: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Running", Reason="", readiness=true. Elapsed: 14.03566851s Mar 16 21:18:27.687: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Running", Reason="", readiness=true. Elapsed: 16.039756233s Mar 16 21:18:29.691: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.043092295s Mar 16 21:18:31.695: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Running", Reason="", readiness=true. Elapsed: 20.047467898s Mar 16 21:18:33.700: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Running", Reason="", readiness=true. Elapsed: 22.051980734s Mar 16 21:18:35.704: INFO: Pod "pod-subpath-test-secret-b48x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056686508s STEP: Saw pod success Mar 16 21:18:35.704: INFO: Pod "pod-subpath-test-secret-b48x" satisfied condition "success or failure" Mar 16 21:18:35.707: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-b48x container test-container-subpath-secret-b48x: STEP: delete the pod Mar 16 21:18:35.743: INFO: Waiting for pod pod-subpath-test-secret-b48x to disappear Mar 16 21:18:35.757: INFO: Pod pod-subpath-test-secret-b48x no longer exists STEP: Deleting pod pod-subpath-test-secret-b48x Mar 16 21:18:35.757: INFO: Deleting pod "pod-subpath-test-secret-b48x" in namespace "subpath-9321" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:18:35.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9321" for this suite. • [SLOW TEST:24.253 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":35,"skipped":652,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:18:35.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 16 21:18:35.885: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9984 /api/v1/namespaces/watch-9984/configmaps/e2e-watch-test-label-changed df1e22c1-63c0-47a5-abe7-54969d08ff0e 315242 0 2020-03-16 21:18:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 21:18:35.886: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9984 
/api/v1/namespaces/watch-9984/configmaps/e2e-watch-test-label-changed df1e22c1-63c0-47a5-abe7-54969d08ff0e 315243 0 2020-03-16 21:18:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 16 21:18:35.886: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9984 /api/v1/namespaces/watch-9984/configmaps/e2e-watch-test-label-changed df1e22c1-63c0-47a5-abe7-54969d08ff0e 315244 0 2020-03-16 21:18:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 16 21:18:45.941: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9984 /api/v1/namespaces/watch-9984/configmaps/e2e-watch-test-label-changed df1e22c1-63c0-47a5-abe7-54969d08ff0e 315286 0 2020-03-16 21:18:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 21:18:45.941: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9984 /api/v1/namespaces/watch-9984/configmaps/e2e-watch-test-label-changed df1e22c1-63c0-47a5-abe7-54969d08ff0e 315287 0 2020-03-16 21:18:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 16 21:18:45.941: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9984 /api/v1/namespaces/watch-9984/configmaps/e2e-watch-test-label-changed df1e22c1-63c0-47a5-abe7-54969d08ff0e 315288 0 2020-03-16 21:18:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:18:45.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9984" for this suite. 
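
For context on the watch output above: the ADDED/MODIFIED/DELETED sequence repeats because an object that stops matching a label selector is reported to the watcher as DELETED, and reappears as ADDED once the label is restored. A minimal sketch of an equivalent watch, assuming a recent client-go (newer releases take a context argument that the v1.17-era client did not) and an illustrative namespace; only the selector string is taken from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Watch only configmaps carrying the label the test toggles.
        w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector: "watch-this-configmap=label-changed-and-restored",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // An object that stops matching the selector surfaces as DELETED;
        // one that starts matching again surfaces as ADDED, which is the
        // pattern recorded above.
        for ev := range w.ResultChan() {
            fmt.Printf("%s %T\n", ev.Type, ev.Object)
        }
    }
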
• [SLOW TEST:10.181 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":36,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:18:45.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2113.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2113.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2113.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2113.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2113.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2113.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 21:18:52.072: INFO: DNS probes using dns-2113/dns-test-d0f01692-1a19-4fe7-ad30-6b379b772d49 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:18:52.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2113" for this suite. 
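
The probe loops in the DNS test above derive each pod's A record from its IP address: hostname -i prints something like 10.244.2.24, and the awk stage rewrites it into 10-244-2-24.dns-2113.pod.cluster.local. The same transformation as a standalone Go sketch (the function name is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // podARecord mirrors the awk pipeline in the probe scripts: dots in the
    // pod IP become dashes, and the namespace-scoped pod DNS suffix is added.
    func podARecord(podIP, namespace string) string {
        return fmt.Sprintf("%s.%s.pod.cluster.local",
            strings.ReplaceAll(podIP, ".", "-"), namespace)
    }

    func main() {
        // 10.244.2.24 in dns-2113 -> 10-244-2-24.dns-2113.pod.cluster.local
        fmt.Println(podARecord("10.244.2.24", "dns-2113"))
    }
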
• [SLOW TEST:6.238 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":37,"skipped":686,"failed":0} S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:18:52.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4b4fefc8-a213-4fad-b5fb-3d3b56781316 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-4b4fefc8-a213-4fad-b5fb-3d3b56781316 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:19:00.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2590" for this suite. 
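
The "waiting to observe update in volume" step above works because the kubelet periodically re-syncs projected configMap volumes, so a change to the ConfigMap object eventually appears in the mounted files. A minimal sketch of a pod in that shape, built with the k8s.io/api types the suite itself uses; the configmap name is taken from the log, while the image, command, key and mount path are illustrative assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "projected-configmap-volume-test",
                    Image: "busybox",
                    // Re-read the projected file so an update to the
                    // ConfigMap eventually shows up in the output.
                    Command: []string{"sh", "-c",
                        "while true; do cat /etc/projected/data-1; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/projected"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "cfg",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{
                                        Name: "projected-configmap-test-upd-4b4fefc8-a213-4fad-b5fb-3d3b56781316",
                                    },
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
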
• [SLOW TEST:8.427 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":687,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:19:00.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-05a93adc-6552-4d15-b888-c9eef70d7557 STEP: Creating a pod to test consume secrets Mar 16 21:19:00.676: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a3213ee3-3100-49c0-9c2f-56caf3599c9d" in namespace "projected-9638" to be "success or failure" Mar 16 21:19:00.710: INFO: Pod "pod-projected-secrets-a3213ee3-3100-49c0-9c2f-56caf3599c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.039544ms Mar 16 21:19:02.741: INFO: Pod "pod-projected-secrets-a3213ee3-3100-49c0-9c2f-56caf3599c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064371076s Mar 16 21:19:04.744: INFO: Pod "pod-projected-secrets-a3213ee3-3100-49c0-9c2f-56caf3599c9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067844547s STEP: Saw pod success Mar 16 21:19:04.744: INFO: Pod "pod-projected-secrets-a3213ee3-3100-49c0-9c2f-56caf3599c9d" satisfied condition "success or failure" Mar 16 21:19:04.746: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-a3213ee3-3100-49c0-9c2f-56caf3599c9d container projected-secret-volume-test: STEP: delete the pod Mar 16 21:19:04.781: INFO: Waiting for pod pod-projected-secrets-a3213ee3-3100-49c0-9c2f-56caf3599c9d to disappear Mar 16 21:19:04.792: INFO: Pod pod-projected-secrets-a3213ee3-3100-49c0-9c2f-56caf3599c9d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:19:04.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9638" for this suite. 
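
The defaultMode variant of the projected-secret test differs from the plain case only in the file mode the kubelet applies when it materializes the secret files. A sketch of the relevant volume fragment, wrapped in a small program for self-containedness; the secret name is taken from the log, and 0400 is an illustrative mode:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Applied to every projected file unless overridden per item.
        mode := int32(0400)
        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-secret-test-05a93adc-6552-4d15-b888-c9eef70d7557",
                            },
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
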
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":696,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:19:04.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-2567d498-f404-4972-bef3-f6e9b32d0341 STEP: Creating a pod to test consume secrets Mar 16 21:19:04.878: INFO: Waiting up to 5m0s for pod "pod-secrets-c37c88fe-5b42-4255-b43b-009c601d9e18" in namespace "secrets-2427" to be "success or failure" Mar 16 21:19:04.888: INFO: Pod "pod-secrets-c37c88fe-5b42-4255-b43b-009c601d9e18": Phase="Pending", Reason="", readiness=false. Elapsed: 10.20602ms Mar 16 21:19:06.901: INFO: Pod "pod-secrets-c37c88fe-5b42-4255-b43b-009c601d9e18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023095629s Mar 16 21:19:08.905: INFO: Pod "pod-secrets-c37c88fe-5b42-4255-b43b-009c601d9e18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02751958s STEP: Saw pod success Mar 16 21:19:08.905: INFO: Pod "pod-secrets-c37c88fe-5b42-4255-b43b-009c601d9e18" satisfied condition "success or failure" Mar 16 21:19:08.908: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c37c88fe-5b42-4255-b43b-009c601d9e18 container secret-volume-test: STEP: delete the pod Mar 16 21:19:08.926: INFO: Waiting for pod pod-secrets-c37c88fe-5b42-4255-b43b-009c601d9e18 to disappear Mar 16 21:19:08.960: INFO: Pod pod-secrets-c37c88fe-5b42-4255-b43b-009c601d9e18 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:19:08.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2427" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":697,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:19:08.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-9cc98277-56ed-4f12-a0ae-94a6cd4972d0 STEP: Creating a pod to test consume configMaps Mar 16 21:19:09.040: INFO: Waiting up to 5m0s for pod "pod-configmaps-294dfba6-8128-4899-8a3b-bfbc8f9dbdc5" in namespace "configmap-7348" to be "success or failure" Mar 16 21:19:09.050: INFO: Pod "pod-configmaps-294dfba6-8128-4899-8a3b-bfbc8f9dbdc5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.486298ms Mar 16 21:19:11.054: INFO: Pod "pod-configmaps-294dfba6-8128-4899-8a3b-bfbc8f9dbdc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013094383s Mar 16 21:19:13.058: INFO: Pod "pod-configmaps-294dfba6-8128-4899-8a3b-bfbc8f9dbdc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017195529s STEP: Saw pod success Mar 16 21:19:13.058: INFO: Pod "pod-configmaps-294dfba6-8128-4899-8a3b-bfbc8f9dbdc5" satisfied condition "success or failure" Mar 16 21:19:13.060: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-294dfba6-8128-4899-8a3b-bfbc8f9dbdc5 container configmap-volume-test: STEP: delete the pod Mar 16 21:19:13.080: INFO: Waiting for pod pod-configmaps-294dfba6-8128-4899-8a3b-bfbc8f9dbdc5 to disappear Mar 16 21:19:13.100: INFO: Pod pod-configmaps-294dfba6-8128-4899-8a3b-bfbc8f9dbdc5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:19:13.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7348" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":717,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:19:13.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 16 21:19:13.181: INFO: Waiting up to 5m0s for pod "var-expansion-837f1ae4-1852-44b8-bdac-aebd3d6818a5" in namespace "var-expansion-4021" to be "success or failure" Mar 16 21:19:13.192: INFO: Pod "var-expansion-837f1ae4-1852-44b8-bdac-aebd3d6818a5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.52462ms Mar 16 21:19:15.197: INFO: Pod "var-expansion-837f1ae4-1852-44b8-bdac-aebd3d6818a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016314361s Mar 16 21:19:17.201: INFO: Pod "var-expansion-837f1ae4-1852-44b8-bdac-aebd3d6818a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02045694s STEP: Saw pod success Mar 16 21:19:17.201: INFO: Pod "var-expansion-837f1ae4-1852-44b8-bdac-aebd3d6818a5" satisfied condition "success or failure" Mar 16 21:19:17.204: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-837f1ae4-1852-44b8-bdac-aebd3d6818a5 container dapi-container: STEP: delete the pod Mar 16 21:19:17.223: INFO: Waiting for pod var-expansion-837f1ae4-1852-44b8-bdac-aebd3d6818a5 to disappear Mar 16 21:19:17.243: INFO: Pod var-expansion-837f1ae4-1852-44b8-bdac-aebd3d6818a5 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:19:17.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4021" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":728,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:19:17.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 16 21:19:21.828: INFO: Successfully updated pod "pod-update-activedeadlineseconds-21ebd1eb-d644-4808-aa78-a325a3081a6e" Mar 16 21:19:21.829: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-21ebd1eb-d644-4808-aa78-a325a3081a6e" in namespace "pods-6334" to be "terminated due to deadline exceeded" Mar 16 21:19:21.833: INFO: Pod "pod-update-activedeadlineseconds-21ebd1eb-d644-4808-aa78-a325a3081a6e": Phase="Running", Reason="", readiness=true. Elapsed: 4.510691ms Mar 16 21:19:23.944: INFO: Pod "pod-update-activedeadlineseconds-21ebd1eb-d644-4808-aa78-a325a3081a6e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.115902863s Mar 16 21:19:23.945: INFO: Pod "pod-update-activedeadlineseconds-21ebd1eb-d644-4808-aa78-a325a3081a6e" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:19:23.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6334" for this suite. 
• [SLOW TEST:6.702 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":743,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:19:23.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 21:19:24.106: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbfc722a-79c3-40ce-b4d8-146dcb442429" in namespace "downward-api-3986" to be "success or failure" Mar 16 21:19:24.120: INFO: Pod "downwardapi-volume-dbfc722a-79c3-40ce-b4d8-146dcb442429": Phase="Pending", Reason="", readiness=false. Elapsed: 13.234965ms Mar 16 21:19:26.150: INFO: Pod "downwardapi-volume-dbfc722a-79c3-40ce-b4d8-146dcb442429": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043541364s Mar 16 21:19:28.154: INFO: Pod "downwardapi-volume-dbfc722a-79c3-40ce-b4d8-146dcb442429": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047509871s STEP: Saw pod success Mar 16 21:19:28.154: INFO: Pod "downwardapi-volume-dbfc722a-79c3-40ce-b4d8-146dcb442429" satisfied condition "success or failure" Mar 16 21:19:28.157: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dbfc722a-79c3-40ce-b4d8-146dcb442429 container client-container: STEP: delete the pod Mar 16 21:19:28.231: INFO: Waiting for pod downwardapi-volume-dbfc722a-79c3-40ce-b4d8-146dcb442429 to disappear Mar 16 21:19:28.256: INFO: Pod downwardapi-volume-dbfc722a-79c3-40ce-b4d8-146dcb442429 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:19:28.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3986" for this suite. 
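
The downward API volume variant above exposes the container's own memory limit as a file via a resourceFieldRef, rather than as an env var. A sketch of the container-plus-volume pairing; "limits.memory" is the real resource selector, while the limit value, image, command and paths are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    // The file below reflects this container's own limit.
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("64Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
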
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":746,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:19:28.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 16 21:19:28.316: INFO: Waiting up to 5m0s for pod "pod-263b2413-2c97-44da-88a1-b35caf46653a" in namespace "emptydir-2590" to be "success or failure" Mar 16 21:19:28.320: INFO: Pod "pod-263b2413-2c97-44da-88a1-b35caf46653a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.839007ms Mar 16 21:19:30.324: INFO: Pod "pod-263b2413-2c97-44da-88a1-b35caf46653a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007461185s Mar 16 21:19:32.333: INFO: Pod "pod-263b2413-2c97-44da-88a1-b35caf46653a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017272835s STEP: Saw pod success Mar 16 21:19:32.333: INFO: Pod "pod-263b2413-2c97-44da-88a1-b35caf46653a" satisfied condition "success or failure" Mar 16 21:19:32.336: INFO: Trying to get logs from node jerma-worker2 pod pod-263b2413-2c97-44da-88a1-b35caf46653a container test-container: STEP: delete the pod Mar 16 21:19:32.373: INFO: Waiting for pod pod-263b2413-2c97-44da-88a1-b35caf46653a to disappear Mar 16 21:19:32.384: INFO: Pod pod-263b2413-2c97-44da-88a1-b35caf46653a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:19:32.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2590" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":757,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:19:32.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 21:19:33.139: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 21:19:35.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990373, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990373, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990373, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990373, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:19:38.176: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:19:38.185: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-431" for this suite. STEP: Destroying namespace "webhook-431-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.884 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":46,"skipped":774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:19:38.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6694.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6694.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6694.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6694.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6694.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6694.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 21:19:44.396: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:44.399: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:44.402: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:44.405: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:44.414: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:44.417: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:44.420: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod 
dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:44.422: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:44.428: INFO: Lookups using dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local] Mar 16 21:19:49.432: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:49.436: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:49.439: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:49.442: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:49.452: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:49.456: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:49.459: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:49.462: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:49.468: INFO: Lookups using dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local] Mar 16 21:19:54.433: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:54.437: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:54.441: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:54.444: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:54.454: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:54.456: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:54.459: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:54.462: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:54.468: INFO: Lookups using dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local] Mar 16 21:19:59.433: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:59.436: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:59.438: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:59.440: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:59.448: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:59.451: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:59.454: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:59.508: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:19:59.514: INFO: Lookups using dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local] Mar 16 21:20:04.433: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:04.440: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:04.444: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:04.447: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested 
resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:04.454: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:04.456: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:04.458: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:04.460: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:04.466: INFO: Lookups using dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local] Mar 16 21:20:09.431: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:09.434: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:09.436: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:09.439: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:09.472: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:09.475: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:09.478: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:09.481: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local from pod dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596: the server could not find the requested resource (get pods dns-test-c379e048-e7b8-46f2-9707-c4e34e558596) Mar 16 21:20:09.487: INFO: Lookups using dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6694.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6694.svc.cluster.local jessie_udp@dns-test-service-2.dns-6694.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6694.svc.cluster.local] Mar 16 21:20:14.464: INFO: DNS probes using dns-6694/dns-test-c379e048-e7b8-46f2-9707-c4e34e558596 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:20:14.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6694" for this suite. • [SLOW TEST:36.671 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":47,"skipped":842,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:20:14.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8902 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8902 STEP: creating replication controller externalsvc in namespace services-8902 I0316 21:20:15.214458 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8902, replica count: 2 I0316 21:20:18.264865 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 21:20:21.265056 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 16 21:20:21.300: INFO: Creating new exec pod Mar 16 21:20:25.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8902 execpod8qk45 -- /bin/sh -x -c nslookup clusterip-service' Mar 16 21:20:25.566: INFO: stderr: "I0316 21:20:25.479398 441 log.go:172] (0xc0005b0d10) (0xc00067dea0) Create stream\nI0316 21:20:25.479452 441 log.go:172] (0xc0005b0d10) (0xc00067dea0) Stream added, broadcasting: 1\nI0316 21:20:25.482268 441 log.go:172] (0xc0005b0d10) Reply frame received for 1\nI0316 21:20:25.482316 441 log.go:172] (0xc0005b0d10) (0xc000791540) Create stream\nI0316 21:20:25.482334 441 log.go:172] (0xc0005b0d10) (0xc000791540) Stream added, broadcasting: 3\nI0316 21:20:25.483387 441 log.go:172] (0xc0005b0d10) Reply frame received for 3\nI0316 21:20:25.483435 441 log.go:172] (0xc0005b0d10) (0xc00067df40) Create stream\nI0316 21:20:25.483454 441 log.go:172] (0xc0005b0d10) (0xc00067df40) Stream added, broadcasting: 5\nI0316 21:20:25.484381 441 log.go:172] (0xc0005b0d10) Reply frame received for 5\nI0316 21:20:25.547259 441 log.go:172] (0xc0005b0d10) Data frame received for 5\nI0316 21:20:25.547289 441 log.go:172] (0xc00067df40) (5) Data frame handling\nI0316 21:20:25.547318 441 log.go:172] (0xc00067df40) (5) Data frame sent\n+ nslookup clusterip-service\nI0316 21:20:25.557729 441 log.go:172] (0xc0005b0d10) Data frame received for 3\nI0316 21:20:25.557768 441 log.go:172] (0xc000791540) (3) Data frame handling\nI0316 21:20:25.557810 441 log.go:172] (0xc000791540) (3) Data frame sent\nI0316 21:20:25.559003 441 log.go:172] (0xc0005b0d10) Data frame received for 3\nI0316 21:20:25.559048 441 log.go:172] (0xc000791540) (3) Data frame handling\nI0316 21:20:25.559089 441 log.go:172] (0xc000791540) (3) Data frame sent\nI0316 21:20:25.559354 441 log.go:172] (0xc0005b0d10) Data frame received for 3\nI0316 21:20:25.559390 441 log.go:172] (0xc0005b0d10) Data frame received for 5\nI0316 21:20:25.559423 441 log.go:172] (0xc00067df40) (5) Data frame handling\nI0316 21:20:25.559464 441 log.go:172] (0xc000791540) (3) Data frame handling\nI0316 21:20:25.561724 441 log.go:172] (0xc0005b0d10) Data frame received for 1\nI0316 21:20:25.561748 441 log.go:172] (0xc00067dea0) (1) Data frame handling\nI0316 21:20:25.561763 441 log.go:172] (0xc00067dea0) (1) Data frame sent\nI0316 21:20:25.561778 441 log.go:172] (0xc0005b0d10) (0xc00067dea0) Stream removed, broadcasting: 1\nI0316 21:20:25.561871 441 log.go:172] (0xc0005b0d10) Go away received\nI0316 21:20:25.562176 441 log.go:172] (0xc0005b0d10) (0xc00067dea0) Stream removed, broadcasting: 1\nI0316 21:20:25.562203 441 log.go:172] (0xc0005b0d10) (0xc000791540) Stream removed, broadcasting: 3\nI0316 21:20:25.562221 441 log.go:172] (0xc0005b0d10) (0xc00067df40) Stream removed, broadcasting: 5\n" Mar 16 21:20:25.566: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8902.svc.cluster.local\tcanonical name = externalsvc.services-8902.svc.cluster.local.\nName:\texternalsvc.services-8902.svc.cluster.local\nAddress: 10.104.170.34\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8902, will wait for the garbage collector to delete the pods Mar 16 21:20:25.654: INFO: Deleting 
ReplicationController externalsvc took: 12.308794ms Mar 16 21:20:25.955: INFO: Terminating ReplicationController externalsvc pods took: 300.2848ms Mar 16 21:20:39.287: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:20:39.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8902" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:24.376 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":48,"skipped":860,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:20:39.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 16 21:20:39.361: INFO: namespace kubectl-6824 Mar 16 21:20:39.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6824' Mar 16 21:20:39.685: INFO: stderr: "" Mar 16 21:20:39.685: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 16 21:20:40.766: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 21:20:40.766: INFO: Found 0 / 1 Mar 16 21:20:41.688: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 21:20:41.689: INFO: Found 0 / 1 Mar 16 21:20:42.690: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 21:20:42.690: INFO: Found 1 / 1 Mar 16 21:20:42.690: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 16 21:20:42.694: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 21:20:42.694: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
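------------------------------
The type change exercised in the ClusterIP-to-ExternalName test above is, at the API level, a single Service update: set spec.type to ExternalName, clear the allocated spec.clusterIP, and point spec.externalName at the target DNS name, after which in-cluster resolvers return the CNAME seen in the nslookup output. A minimal client-go sketch of that update, reusing the namespace and service names from this run; this is an illustrative standalone program using current client-go signatures (the v1.17-era client omitted the context argument), not the e2e framework's actual helper:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "services-8902" // namespace from the run above
	svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// An ExternalName Service carries no cluster IP; clear the allocated one
	// and point the name at the backing service's in-cluster DNS record.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ClusterIP = ""
	svc.Spec.ExternalName = "externalsvc." + ns + ".svc.cluster.local"
	if _, err := cs.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("clusterip-service now resolves via CNAME to externalsvc")
}
------------------------------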
Mar 16 21:20:42.694: INFO: wait on agnhost-master startup in kubectl-6824 Mar 16 21:20:42.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-6l2k9 agnhost-master --namespace=kubectl-6824' Mar 16 21:20:42.801: INFO: stderr: "" Mar 16 21:20:42.801: INFO: stdout: "Paused\n" STEP: exposing RC Mar 16 21:20:42.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6824' Mar 16 21:20:42.967: INFO: stderr: "" Mar 16 21:20:42.967: INFO: stdout: "service/rm2 exposed\n" Mar 16 21:20:42.969: INFO: Service rm2 in namespace kubectl-6824 found. STEP: exposing service Mar 16 21:20:44.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6824' Mar 16 21:20:45.178: INFO: stderr: "" Mar 16 21:20:45.178: INFO: stdout: "service/rm3 exposed\n" Mar 16 21:20:45.189: INFO: Service rm3 in namespace kubectl-6824 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:20:47.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6824" for this suite. • [SLOW TEST:7.881 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":49,"skipped":894,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:20:47.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-99febac5-46cc-460a-b1e1-855bb9bfc638 STEP: Creating secret with name secret-projected-all-test-volume-9902331d-f941-4e7f-8ac0-f29c4fd1fa03 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 16 21:20:47.351: INFO: Waiting up to 5m0s for pod "projected-volume-37d8c808-bde1-4dad-976f-72de9adedff2" in namespace "projected-5702" to be "success or failure" Mar 16 21:20:47.356: INFO: Pod "projected-volume-37d8c808-bde1-4dad-976f-72de9adedff2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.812124ms Mar 16 21:20:49.360: INFO: Pod "projected-volume-37d8c808-bde1-4dad-976f-72de9adedff2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008903779s Mar 16 21:20:51.364: INFO: Pod "projected-volume-37d8c808-bde1-4dad-976f-72de9adedff2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012902632s STEP: Saw pod success Mar 16 21:20:51.364: INFO: Pod "projected-volume-37d8c808-bde1-4dad-976f-72de9adedff2" satisfied condition "success or failure" Mar 16 21:20:51.368: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-37d8c808-bde1-4dad-976f-72de9adedff2 container projected-all-volume-test: STEP: delete the pod Mar 16 21:20:51.388: INFO: Waiting for pod projected-volume-37d8c808-bde1-4dad-976f-72de9adedff2 to disappear Mar 16 21:20:51.392: INFO: Pod projected-volume-37d8c808-bde1-4dad-976f-72de9adedff2 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:20:51.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5702" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":50,"skipped":978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:20:51.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 16 21:20:51.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-724' Mar 16 21:20:51.643: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 16 21:20:51.643: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793 Mar 16 21:20:51.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-724' Mar 16 21:20:51.775: INFO: stderr: "" Mar 16 21:20:51.775: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:20:51.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-724" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":51,"skipped":1009,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:20:51.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:20:51.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2888" for this suite. 
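------------------------------
The discovery walk in the CustomResourceDefinition test above amounts to three GETs against the API server: /apis (find the apiextensions.k8s.io group), /apis/apiextensions.k8s.io (confirm the v1 group/version), and /apis/apiextensions.k8s.io/v1 (find the customresourcedefinitions resource). A hypothetical sketch of the same checks using client-go's discovery client, assuming the kubeconfig path from this run:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups() // GET /apis
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("found group", g.Name, "preferred version", g.PreferredVersion.GroupVersion)
		}
	}
	// GET /apis/apiextensions.k8s.io/v1
	rl, err := dc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		if r.Name == "customresourcedefinitions" {
			fmt.Println("discovery document advertises", r.Name)
		}
	}
}
------------------------------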
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":52,"skipped":1028,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:20:51.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7504 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 16 21:20:51.913: INFO: Found 0 stateful pods, waiting for 3 Mar 16 21:21:01.928: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 21:21:01.928: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 21:21:01.928: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 16 21:21:11.918: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 21:21:11.918: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 21:21:11.918: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 16 21:21:11.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7504 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 21:21:12.199: INFO: stderr: "I0316 21:21:12.069006 589 log.go:172] (0xc00054b1e0) (0xc000609c20) Create stream\nI0316 21:21:12.069075 589 log.go:172] (0xc00054b1e0) (0xc000609c20) Stream added, broadcasting: 1\nI0316 21:21:12.072659 589 log.go:172] (0xc00054b1e0) Reply frame received for 1\nI0316 21:21:12.072712 589 log.go:172] (0xc00054b1e0) (0xc000382000) Create stream\nI0316 21:21:12.072727 589 log.go:172] (0xc00054b1e0) (0xc000382000) Stream added, broadcasting: 3\nI0316 21:21:12.073874 589 log.go:172] (0xc00054b1e0) Reply frame received for 3\nI0316 21:21:12.073902 589 log.go:172] (0xc00054b1e0) (0xc0003820a0) Create stream\nI0316 21:21:12.073911 589 log.go:172] (0xc00054b1e0) (0xc0003820a0) Stream added, broadcasting: 5\nI0316 21:21:12.074627 589 log.go:172] (0xc00054b1e0) Reply frame received for 5\nI0316 21:21:12.160822 589 log.go:172] (0xc00054b1e0) Data frame received for 5\nI0316 21:21:12.160842 589 log.go:172] (0xc0003820a0) (5) Data frame handling\nI0316 21:21:12.160855 589 log.go:172] (0xc0003820a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 21:21:12.192350 
589 log.go:172] (0xc00054b1e0) Data frame received for 3\nI0316 21:21:12.192394 589 log.go:172] (0xc000382000) (3) Data frame handling\nI0316 21:21:12.192426 589 log.go:172] (0xc000382000) (3) Data frame sent\nI0316 21:21:12.192445 589 log.go:172] (0xc00054b1e0) Data frame received for 3\nI0316 21:21:12.192461 589 log.go:172] (0xc000382000) (3) Data frame handling\nI0316 21:21:12.192530 589 log.go:172] (0xc00054b1e0) Data frame received for 5\nI0316 21:21:12.192562 589 log.go:172] (0xc0003820a0) (5) Data frame handling\nI0316 21:21:12.195099 589 log.go:172] (0xc00054b1e0) Data frame received for 1\nI0316 21:21:12.195128 589 log.go:172] (0xc000609c20) (1) Data frame handling\nI0316 21:21:12.195140 589 log.go:172] (0xc000609c20) (1) Data frame sent\nI0316 21:21:12.195234 589 log.go:172] (0xc00054b1e0) (0xc000609c20) Stream removed, broadcasting: 1\nI0316 21:21:12.195296 589 log.go:172] (0xc00054b1e0) Go away received\nI0316 21:21:12.195779 589 log.go:172] (0xc00054b1e0) (0xc000609c20) Stream removed, broadcasting: 1\nI0316 21:21:12.195807 589 log.go:172] (0xc00054b1e0) (0xc000382000) Stream removed, broadcasting: 3\nI0316 21:21:12.195819 589 log.go:172] (0xc00054b1e0) (0xc0003820a0) Stream removed, broadcasting: 5\n" Mar 16 21:21:12.199: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 21:21:12.199: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 16 21:21:22.231: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 16 21:21:32.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7504 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 21:21:32.488: INFO: stderr: "I0316 21:21:32.407012 611 log.go:172] (0xc0007c0b00) (0xc0007b2000) Create stream\nI0316 21:21:32.407061 611 log.go:172] (0xc0007c0b00) (0xc0007b2000) Stream added, broadcasting: 1\nI0316 21:21:32.409847 611 log.go:172] (0xc0007c0b00) Reply frame received for 1\nI0316 21:21:32.409897 611 log.go:172] (0xc0007c0b00) (0xc000970000) Create stream\nI0316 21:21:32.409905 611 log.go:172] (0xc0007c0b00) (0xc000970000) Stream added, broadcasting: 3\nI0316 21:21:32.410941 611 log.go:172] (0xc0007c0b00) Reply frame received for 3\nI0316 21:21:32.410984 611 log.go:172] (0xc0007c0b00) (0xc0005e1a40) Create stream\nI0316 21:21:32.411001 611 log.go:172] (0xc0007c0b00) (0xc0005e1a40) Stream added, broadcasting: 5\nI0316 21:21:32.411845 611 log.go:172] (0xc0007c0b00) Reply frame received for 5\nI0316 21:21:32.484052 611 log.go:172] (0xc0007c0b00) Data frame received for 3\nI0316 21:21:32.484096 611 log.go:172] (0xc000970000) (3) Data frame handling\nI0316 21:21:32.484127 611 log.go:172] (0xc0007c0b00) Data frame received for 5\nI0316 21:21:32.484210 611 log.go:172] (0xc0005e1a40) (5) Data frame handling\nI0316 21:21:32.484229 611 log.go:172] (0xc0005e1a40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 21:21:32.484250 611 log.go:172] (0xc000970000) (3) Data frame sent\nI0316 21:21:32.484283 611 log.go:172] (0xc0007c0b00) Data frame received for 3\nI0316 21:21:32.484301 611 log.go:172] (0xc000970000) (3) Data frame handling\nI0316 21:21:32.484329 611 log.go:172] (0xc0007c0b00) Data frame received for 5\nI0316 
21:21:32.484346 611 log.go:172] (0xc0005e1a40) (5) Data frame handling\nI0316 21:21:32.485416 611 log.go:172] (0xc0007c0b00) Data frame received for 1\nI0316 21:21:32.485431 611 log.go:172] (0xc0007b2000) (1) Data frame handling\nI0316 21:21:32.485449 611 log.go:172] (0xc0007b2000) (1) Data frame sent\nI0316 21:21:32.485460 611 log.go:172] (0xc0007c0b00) (0xc0007b2000) Stream removed, broadcasting: 1\nI0316 21:21:32.485586 611 log.go:172] (0xc0007c0b00) Go away received\nI0316 21:21:32.485804 611 log.go:172] (0xc0007c0b00) (0xc0007b2000) Stream removed, broadcasting: 1\nI0316 21:21:32.485820 611 log.go:172] (0xc0007c0b00) (0xc000970000) Stream removed, broadcasting: 3\nI0316 21:21:32.485829 611 log.go:172] (0xc0007c0b00) (0xc0005e1a40) Stream removed, broadcasting: 5\n" Mar 16 21:21:32.488: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 21:21:32.488: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 21:21:42.510: INFO: Waiting for StatefulSet statefulset-7504/ss2 to complete update Mar 16 21:21:42.510: INFO: Waiting for Pod statefulset-7504/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 16 21:21:42.510: INFO: Waiting for Pod statefulset-7504/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 16 21:21:52.517: INFO: Waiting for StatefulSet statefulset-7504/ss2 to complete update Mar 16 21:21:52.517: INFO: Waiting for Pod statefulset-7504/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 16 21:22:02.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7504 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 21:22:02.782: INFO: stderr: "I0316 21:22:02.661310 632 log.go:172] (0xc000589290) (0xc000ad0000) Create stream\nI0316 21:22:02.661383 632 log.go:172] (0xc000589290) (0xc000ad0000) Stream added, broadcasting: 1\nI0316 21:22:02.664343 632 log.go:172] (0xc000589290) Reply frame received for 1\nI0316 21:22:02.664399 632 log.go:172] (0xc000589290) (0xc000625b80) Create stream\nI0316 21:22:02.664428 632 log.go:172] (0xc000589290) (0xc000625b80) Stream added, broadcasting: 3\nI0316 21:22:02.665847 632 log.go:172] (0xc000589290) Reply frame received for 3\nI0316 21:22:02.665881 632 log.go:172] (0xc000589290) (0xc000625d60) Create stream\nI0316 21:22:02.665894 632 log.go:172] (0xc000589290) (0xc000625d60) Stream added, broadcasting: 5\nI0316 21:22:02.666993 632 log.go:172] (0xc000589290) Reply frame received for 5\nI0316 21:22:02.752173 632 log.go:172] (0xc000589290) Data frame received for 5\nI0316 21:22:02.752198 632 log.go:172] (0xc000625d60) (5) Data frame handling\nI0316 21:22:02.752220 632 log.go:172] (0xc000625d60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 21:22:02.775478 632 log.go:172] (0xc000589290) Data frame received for 3\nI0316 21:22:02.775520 632 log.go:172] (0xc000625b80) (3) Data frame handling\nI0316 21:22:02.775560 632 log.go:172] (0xc000625b80) (3) Data frame sent\nI0316 21:22:02.775946 632 log.go:172] (0xc000589290) Data frame received for 5\nI0316 21:22:02.775958 632 log.go:172] (0xc000625d60) (5) Data frame handling\nI0316 21:22:02.775980 632 log.go:172] (0xc000589290) Data frame received for 3\nI0316 21:22:02.775997 632 log.go:172] (0xc000625b80) (3) Data frame handling\nI0316 21:22:02.777655 632 
log.go:172] (0xc000589290) Data frame received for 1\nI0316 21:22:02.777675 632 log.go:172] (0xc000ad0000) (1) Data frame handling\nI0316 21:22:02.777695 632 log.go:172] (0xc000ad0000) (1) Data frame sent\nI0316 21:22:02.777719 632 log.go:172] (0xc000589290) (0xc000ad0000) Stream removed, broadcasting: 1\nI0316 21:22:02.777965 632 log.go:172] (0xc000589290) Go away received\nI0316 21:22:02.778074 632 log.go:172] (0xc000589290) (0xc000ad0000) Stream removed, broadcasting: 1\nI0316 21:22:02.778094 632 log.go:172] (0xc000589290) (0xc000625b80) Stream removed, broadcasting: 3\nI0316 21:22:02.778104 632 log.go:172] (0xc000589290) (0xc000625d60) Stream removed, broadcasting: 5\n" Mar 16 21:22:02.782: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 21:22:02.782: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 21:22:12.814: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 16 21:22:22.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7504 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 21:22:23.089: INFO: stderr: "I0316 21:22:22.986250 655 log.go:172] (0xc000b6a000) (0xc000920000) Create stream\nI0316 21:22:22.986316 655 log.go:172] (0xc000b6a000) (0xc000920000) Stream added, broadcasting: 1\nI0316 21:22:22.989591 655 log.go:172] (0xc000b6a000) Reply frame received for 1\nI0316 21:22:22.989661 655 log.go:172] (0xc000b6a000) (0xc0009200a0) Create stream\nI0316 21:22:22.989685 655 log.go:172] (0xc000b6a000) (0xc0009200a0) Stream added, broadcasting: 3\nI0316 21:22:22.991152 655 log.go:172] (0xc000b6a000) Reply frame received for 3\nI0316 21:22:22.991204 655 log.go:172] (0xc000b6a000) (0xc000920140) Create stream\nI0316 21:22:22.991225 655 log.go:172] (0xc000b6a000) (0xc000920140) Stream added, broadcasting: 5\nI0316 21:22:22.992106 655 log.go:172] (0xc000b6a000) Reply frame received for 5\nI0316 21:22:23.080715 655 log.go:172] (0xc000b6a000) Data frame received for 3\nI0316 21:22:23.080753 655 log.go:172] (0xc0009200a0) (3) Data frame handling\nI0316 21:22:23.080789 655 log.go:172] (0xc0009200a0) (3) Data frame sent\nI0316 21:22:23.080819 655 log.go:172] (0xc000b6a000) Data frame received for 5\nI0316 21:22:23.080849 655 log.go:172] (0xc000920140) (5) Data frame handling\nI0316 21:22:23.080865 655 log.go:172] (0xc000920140) (5) Data frame sent\nI0316 21:22:23.080892 655 log.go:172] (0xc000b6a000) Data frame received for 5\nI0316 21:22:23.080903 655 log.go:172] (0xc000920140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 21:22:23.080933 655 log.go:172] (0xc000b6a000) Data frame received for 3\nI0316 21:22:23.080950 655 log.go:172] (0xc0009200a0) (3) Data frame handling\nI0316 21:22:23.084136 655 log.go:172] (0xc000b6a000) Data frame received for 1\nI0316 21:22:23.084174 655 log.go:172] (0xc000920000) (1) Data frame handling\nI0316 21:22:23.084202 655 log.go:172] (0xc000920000) (1) Data frame sent\nI0316 21:22:23.084224 655 log.go:172] (0xc000b6a000) (0xc000920000) Stream removed, broadcasting: 1\nI0316 21:22:23.084345 655 log.go:172] (0xc000b6a000) Go away received\nI0316 21:22:23.084626 655 log.go:172] (0xc000b6a000) (0xc000920000) Stream removed, broadcasting: 1\nI0316 21:22:23.084640 655 log.go:172] (0xc000b6a000) (0xc0009200a0) Stream removed, broadcasting: 3\nI0316 21:22:23.084646 655 
log.go:172] (0xc000b6a000) (0xc000920140) Stream removed, broadcasting: 5\n" Mar 16 21:22:23.089: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 21:22:23.089: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 16 21:22:43.110: INFO: Deleting all statefulset in ns statefulset-7504 Mar 16 21:22:43.112: INFO: Scaling statefulset ss2 to 0 Mar 16 21:23:03.130: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 21:23:03.133: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:23:03.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7504" for this suite. • [SLOW TEST:131.323 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":53,"skipped":1032,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:23:03.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 21:23:03.239: INFO: Waiting up to 5m0s for pod "downwardapi-volume-026a5dab-d955-443a-8a7e-fc5f7bee099a" in namespace "projected-6684" to be "success or failure" Mar 16 21:23:03.257: INFO: Pod "downwardapi-volume-026a5dab-d955-443a-8a7e-fc5f7bee099a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.014134ms Mar 16 21:23:05.261: INFO: Pod "downwardapi-volume-026a5dab-d955-443a-8a7e-fc5f7bee099a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021704417s Mar 16 21:23:07.265: INFO: Pod "downwardapi-volume-026a5dab-d955-443a-8a7e-fc5f7bee099a": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.026099548s Mar 16 21:23:09.268: INFO: Pod "downwardapi-volume-026a5dab-d955-443a-8a7e-fc5f7bee099a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029674508s STEP: Saw pod success Mar 16 21:23:09.269: INFO: Pod "downwardapi-volume-026a5dab-d955-443a-8a7e-fc5f7bee099a" satisfied condition "success or failure" Mar 16 21:23:09.271: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-026a5dab-d955-443a-8a7e-fc5f7bee099a container client-container: STEP: delete the pod Mar 16 21:23:09.324: INFO: Waiting for pod downwardapi-volume-026a5dab-d955-443a-8a7e-fc5f7bee099a to disappear Mar 16 21:23:09.341: INFO: Pod downwardapi-volume-026a5dab-d955-443a-8a7e-fc5f7bee099a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:23:09.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6684" for this suite. • [SLOW TEST:6.197 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":1035,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:23:09.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 21:23:10.240: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 21:23:12.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990590, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990590, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990590, loc:(*time.Location)(0x7d83a80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990590, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:23:15.282: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:23:25.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2005" for this suite. STEP: Destroying namespace "webhook-2005-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.272 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":55,"skipped":1041,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:23:25.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 21:23:26.618: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 21:23:28.630: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990606, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990606, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990606, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990606, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:23:31.702: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:23:31.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5832-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:23:32.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7904" for this suite. STEP: Destroying namespace "webhook-7904-markers" for this suite. 
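------------------------------
"Registering the mutating webhook ... via the AdmissionRegistration API" in the test above corresponds to creating a MutatingWebhookConfiguration whose rule matches the freshly created custom resource's group and plural name, and whose client config points at the e2e-test-webhook Service deployed earlier. A sketch of that registration; the webhook name, serving path, and CA handling are illustrative (the real test injects the server cert it generated during setup):

package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	path := "/mutating-custom-resource" // illustrative serving path
	sideEffects := admissionv1.SideEffectClassNone
	failurePolicy := admissionv1.Fail

	hook := &admissionv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
		Webhooks: []admissionv1.MutatingWebhook{{
			Name: "mutate-custom-resource.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-7904", // namespace from the run above
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: nil, // supply the webhook server's CA in real use
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-5832-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), hook, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------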
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.342 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":56,"skipped":1055,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:23:32.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:23:37.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1664" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1082,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:23:37.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:23:54.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-900" for this suite. • [SLOW TEST:17.128 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":58,"skipped":1087,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:23:54.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 21:23:54.288: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5aa30d9b-1abb-44c0-8cd5-810a8851539a" in namespace "downward-api-7319" to be "success or failure" Mar 16 21:23:54.292: INFO: Pod "downwardapi-volume-5aa30d9b-1abb-44c0-8cd5-810a8851539a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098102ms Mar 16 21:23:56.297: INFO: Pod "downwardapi-volume-5aa30d9b-1abb-44c0-8cd5-810a8851539a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008674709s Mar 16 21:23:58.301: INFO: Pod "downwardapi-volume-5aa30d9b-1abb-44c0-8cd5-810a8851539a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013348415s STEP: Saw pod success Mar 16 21:23:58.302: INFO: Pod "downwardapi-volume-5aa30d9b-1abb-44c0-8cd5-810a8851539a" satisfied condition "success or failure" Mar 16 21:23:58.305: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5aa30d9b-1abb-44c0-8cd5-810a8851539a container client-container: STEP: delete the pod Mar 16 21:23:58.324: INFO: Waiting for pod downwardapi-volume-5aa30d9b-1abb-44c0-8cd5-810a8851539a to disappear Mar 16 21:23:58.328: INFO: Pod downwardapi-volume-5aa30d9b-1abb-44c0-8cd5-810a8851539a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:23:58.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7319" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1092,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:23:58.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-dcc80e11-b1e0-41df-a574-6e2df16f56e5 STEP: Creating a pod to test consume configMaps Mar 16 21:23:58.461: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b83c9422-47f2-4bbc-b081-245f31c466dc" in namespace "projected-9179" to be "success or failure" Mar 16 21:23:58.482: INFO: Pod "pod-projected-configmaps-b83c9422-47f2-4bbc-b081-245f31c466dc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.572804ms Mar 16 21:24:00.486: INFO: Pod "pod-projected-configmaps-b83c9422-47f2-4bbc-b081-245f31c466dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024756554s Mar 16 21:24:02.491: INFO: Pod "pod-projected-configmaps-b83c9422-47f2-4bbc-b081-245f31c466dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029164666s STEP: Saw pod success Mar 16 21:24:02.491: INFO: Pod "pod-projected-configmaps-b83c9422-47f2-4bbc-b081-245f31c466dc" satisfied condition "success or failure" Mar 16 21:24:02.494: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b83c9422-47f2-4bbc-b081-245f31c466dc container projected-configmap-volume-test: STEP: delete the pod Mar 16 21:24:02.515: INFO: Waiting for pod pod-projected-configmaps-b83c9422-47f2-4bbc-b081-245f31c466dc to disappear Mar 16 21:24:02.520: INFO: Pod pod-projected-configmaps-b83c9422-47f2-4bbc-b081-245f31c466dc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:24:02.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9179" for this suite. 
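For reference, the pod this kind of projected-configmap test builds looks roughly like the sketch below, using the k8s.io/api core/v1 types; the image, command, mount path, and data-1 key are illustrative assumptions, not values read from the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					// a projected volume wraps one or more sources;
					// here a single ConfigMap is projected into the mount
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									// the test's real name carries a UID suffix
									Name: "projected-configmap-test-volume",
								},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```

The "success or failure" wait in the log is the framework polling the pod phase until the container's command exits, which is why the pod runs with RestartPolicyNever.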
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1103,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:24:02.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 16 21:24:02.588: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 16 21:24:07.599: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:24:08.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2781" for this suite. • [SLOW TEST:6.172 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":61,"skipped":1106,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:24:08.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 16 21:24:08.818: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-847" to be "success or failure" Mar 16 21:24:08.822: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.916936ms Mar 16 21:24:10.827: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008398907s Mar 16 21:24:12.831: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.012763732s Mar 16 21:24:14.835: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016539518s STEP: Saw pod success Mar 16 21:24:14.835: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 16 21:24:14.837: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 16 21:24:14.900: INFO: Waiting for pod pod-host-path-test to disappear Mar 16 21:24:14.904: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:24:14.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-847" for this suite. • [SLOW TEST:6.212 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1128,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:24:14.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-3f4f5437-77a7-4dfb-8ec0-6ed725e1a19d in namespace container-probe-1265 Mar 16 21:24:19.016: INFO: Started pod liveness-3f4f5437-77a7-4dfb-8ec0-6ed725e1a19d in namespace container-probe-1265 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 21:24:19.018: INFO: Initial restart count of pod liveness-3f4f5437-77a7-4dfb-8ec0-6ed725e1a19d is 0 Mar 16 21:24:29.042: INFO: Restart count of pod container-probe-1265/liveness-3f4f5437-77a7-4dfb-8ec0-6ed725e1a19d is now 1 (10.023647512s elapsed) Mar 16 21:24:49.085: INFO: Restart count of pod container-probe-1265/liveness-3f4f5437-77a7-4dfb-8ec0-6ed725e1a19d is now 2 (30.066852388s elapsed) Mar 16 21:25:11.130: INFO: Restart count of pod container-probe-1265/liveness-3f4f5437-77a7-4dfb-8ec0-6ed725e1a19d is now 3 (52.111985261s elapsed) Mar 16 21:25:29.207: INFO: Restart count of pod container-probe-1265/liveness-3f4f5437-77a7-4dfb-8ec0-6ed725e1a19d is now 4 (1m10.188360063s elapsed) Mar 16 21:26:31.347: INFO: Restart count of pod container-probe-1265/liveness-3f4f5437-77a7-4dfb-8ec0-6ed725e1a19d is now 5 (2m12.328605083s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:26:31.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1265" for this suite. • [SLOW TEST:136.475 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1138,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:26:31.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 21:26:31.448: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9b174e8-80c5-4c33-9063-7c89ba778fd6" in namespace "downward-api-6818" to be "success or failure" Mar 16 21:26:31.459: INFO: Pod "downwardapi-volume-c9b174e8-80c5-4c33-9063-7c89ba778fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.457969ms Mar 16 21:26:33.463: INFO: Pod "downwardapi-volume-c9b174e8-80c5-4c33-9063-7c89ba778fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015043194s Mar 16 21:26:35.468: INFO: Pod "downwardapi-volume-c9b174e8-80c5-4c33-9063-7c89ba778fd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019363711s STEP: Saw pod success Mar 16 21:26:35.468: INFO: Pod "downwardapi-volume-c9b174e8-80c5-4c33-9063-7c89ba778fd6" satisfied condition "success or failure" Mar 16 21:26:35.471: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c9b174e8-80c5-4c33-9063-7c89ba778fd6 container client-container: STEP: delete the pod Mar 16 21:26:35.502: INFO: Waiting for pod downwardapi-volume-c9b174e8-80c5-4c33-9063-7c89ba778fd6 to disappear Mar 16 21:26:35.506: INFO: Pod downwardapi-volume-c9b174e8-80c5-4c33-9063-7c89ba778fd6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:26:35.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6818" for this suite. 
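The distinguishing detail in this test is the per-item Mode field on the downward API volume file. A minimal sketch of such an item follows, assuming the k8s.io/api core/v1 types; the 0400 mode and the metadata.name field reference are illustrative choices rather than values taken from the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0o400) // file mode the kubelet should apply to the projected file
	item := corev1.DownwardAPIVolumeFile{
		Path: "podname",
		FieldRef: &corev1.ObjectFieldSelector{
			APIVersion: "v1",
			FieldPath:  "metadata.name",
		},
		// without Mode, the volume-level DefaultMode (0644 by default) applies
		Mode: &mode,
	}
	fmt.Printf("%s -> mode %o\n", item.Path, *item.Mode)
}
```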
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1152,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:26:35.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 16 21:26:35.615: INFO: Waiting up to 5m0s for pod "pod-d5c93202-0eb0-4807-a36e-72bf5cb7c118" in namespace "emptydir-1057" to be "success or failure" Mar 16 21:26:35.645: INFO: Pod "pod-d5c93202-0eb0-4807-a36e-72bf5cb7c118": Phase="Pending", Reason="", readiness=false. Elapsed: 29.84357ms Mar 16 21:26:37.657: INFO: Pod "pod-d5c93202-0eb0-4807-a36e-72bf5cb7c118": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041998723s Mar 16 21:26:39.662: INFO: Pod "pod-d5c93202-0eb0-4807-a36e-72bf5cb7c118": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046417969s STEP: Saw pod success Mar 16 21:26:39.662: INFO: Pod "pod-d5c93202-0eb0-4807-a36e-72bf5cb7c118" satisfied condition "success or failure" Mar 16 21:26:39.665: INFO: Trying to get logs from node jerma-worker2 pod pod-d5c93202-0eb0-4807-a36e-72bf5cb7c118 container test-container: STEP: delete the pod Mar 16 21:26:39.712: INFO: Waiting for pod pod-d5c93202-0eb0-4807-a36e-72bf5cb7c118 to disappear Mar 16 21:26:39.716: INFO: Pod pod-d5c93202-0eb0-4807-a36e-72bf5cb7c118 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:26:39.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1057" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1154,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:26:39.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 16 21:26:47.824: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 21:26:47.829: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 21:26:49.829: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 21:26:49.833: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 21:26:51.829: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 21:26:51.834: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 21:26:53.829: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 21:26:53.834: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 21:26:55.829: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 21:26:55.834: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 21:26:57.829: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 21:26:57.834: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 21:26:59.830: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 21:26:59.834: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:26:59.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-837" for this suite. • [SLOW TEST:20.120 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:26:59.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Mar 16 21:27:03.952: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2148 PodName:pod-sharedvolume-5388be32-0321-4fbc-a12d-422c79160056
ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:27:03.952: INFO: >>> kubeConfig: /root/.kube/config I0316 21:27:03.982685 6 log.go:172] (0xc002ad0630) (0xc000b3cbe0) Create stream I0316 21:27:03.982710 6 log.go:172] (0xc002ad0630) (0xc000b3cbe0) Stream added, broadcasting: 1 I0316 21:27:03.984509 6 log.go:172] (0xc002ad0630) Reply frame received for 1 I0316 21:27:03.984551 6 log.go:172] (0xc002ad0630) (0xc000c194a0) Create stream I0316 21:27:03.984571 6 log.go:172] (0xc002ad0630) (0xc000c194a0) Stream added, broadcasting: 3 I0316 21:27:03.985461 6 log.go:172] (0xc002ad0630) Reply frame received for 3 I0316 21:27:03.985487 6 log.go:172] (0xc002ad0630) (0xc000b3cf00) Create stream I0316 21:27:03.985500 6 log.go:172] (0xc002ad0630) (0xc000b3cf00) Stream added, broadcasting: 5 I0316 21:27:03.986262 6 log.go:172] (0xc002ad0630) Reply frame received for 5 I0316 21:27:04.040748 6 log.go:172] (0xc002ad0630) Data frame received for 5 I0316 21:27:04.040804 6 log.go:172] (0xc000b3cf00) (5) Data frame handling I0316 21:27:04.040840 6 log.go:172] (0xc002ad0630) Data frame received for 3 I0316 21:27:04.040858 6 log.go:172] (0xc000c194a0) (3) Data frame handling I0316 21:27:04.040871 6 log.go:172] (0xc000c194a0) (3) Data frame sent I0316 21:27:04.040946 6 log.go:172] (0xc002ad0630) Data frame received for 3 I0316 21:27:04.040986 6 log.go:172] (0xc000c194a0) (3) Data frame handling I0316 21:27:04.042639 6 log.go:172] (0xc002ad0630) Data frame received for 1 I0316 21:27:04.042664 6 log.go:172] (0xc000b3cbe0) (1) Data frame handling I0316 21:27:04.042680 6 log.go:172] (0xc000b3cbe0) (1) Data frame sent I0316 21:27:04.042692 6 log.go:172] (0xc002ad0630) (0xc000b3cbe0) Stream removed, broadcasting: 1 I0316 21:27:04.042732 6 log.go:172] (0xc002ad0630) Go away received I0316 21:27:04.042779 6 log.go:172] (0xc002ad0630) (0xc000b3cbe0) Stream removed, broadcasting: 1 I0316 21:27:04.042797 6 log.go:172] (0xc002ad0630) (0xc000c194a0) Stream removed, broadcasting: 3 I0316 21:27:04.042807 6 log.go:172] (0xc002ad0630) (0xc000b3cf00) Stream removed, broadcasting: 5 Mar 16 21:27:04.042: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:27:04.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2148" for this suite. 
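The exec above reads a file that one container wrote and another container mounts: both sides share a single emptyDir volume. A sketch of such a pod follows, with the mount path mirroring the log; the container images, names, and commands are assumed for illustration.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	shared := corev1.Volume{
		Name: "shared-data",
		VolumeSource: corev1.VolumeSource{
			// an emptyDir lives for the pod's lifetime and is visible to
			// every container that mounts it, which is what the test exploits
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		},
	}
	mount := corev1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volumeshare"}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{shared},
			Containers: []corev1.Container{
				{
					Name:         "writer",
					Image:        "busybox",
					Command:      []string{"sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
				{
					Name:         "reader",
					Image:        "busybox",
					Command:      []string{"sh", "-c", "sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
			},
		},
	}
	fmt.Println(pod.Name, "mounts", shared.Name, "in", len(pod.Spec.Containers), "containers")
}
```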
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":67,"skipped":1206,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:27:04.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:27:04.141: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:27:08.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5503" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1222,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:27:08.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-089686a5-4240-41d3-8e76-0d7ab9d38765 in namespace container-probe-8777 Mar 16 21:27:12.428: INFO: Started pod liveness-089686a5-4240-41d3-8e76-0d7ab9d38765 in namespace container-probe-8777 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 21:27:12.431: INFO: Initial restart count of pod liveness-089686a5-4240-41d3-8e76-0d7ab9d38765 is 0 Mar 16 21:27:30.471: INFO: Restart count of pod container-probe-8777/liveness-089686a5-4240-41d3-8e76-0d7ab9d38765 is now 1 (18.039415101s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:27:30.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8777" for this suite. 
• [SLOW TEST:22.244 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:27:30.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-7487 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7487 to expose endpoints map[] Mar 16 21:27:31.011: INFO: Get endpoints failed (17.064654ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 16 21:27:32.015: INFO: successfully validated that service multi-endpoint-test in namespace services-7487 exposes endpoints map[] (1.021159951s elapsed) STEP: Creating pod pod1 in namespace services-7487 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7487 to expose endpoints map[pod1:[100]] Mar 16 21:27:35.134: INFO: successfully validated that service multi-endpoint-test in namespace services-7487 exposes endpoints map[pod1:[100]] (3.111166708s elapsed) STEP: Creating pod pod2 in namespace services-7487 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7487 to expose endpoints map[pod1:[100] pod2:[101]] Mar 16 21:27:38.238: INFO: successfully validated that service multi-endpoint-test in namespace services-7487 exposes endpoints map[pod1:[100] pod2:[101]] (3.082990082s elapsed) STEP: Deleting pod pod1 in namespace services-7487 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7487 to expose endpoints map[pod2:[101]] Mar 16 21:27:38.298: INFO: successfully validated that service multi-endpoint-test in namespace services-7487 exposes endpoints map[pod2:[101]] (55.786234ms elapsed) STEP: Deleting pod pod2 in namespace services-7487 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7487 to expose endpoints map[] Mar 16 21:27:39.315: INFO: successfully validated that service multi-endpoint-test in namespace services-7487 exposes endpoints map[] (1.012899084s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:27:39.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7487" for this suite. 
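The endpoints maps in the log above (pod1:[100], pod2:[101]) come from a Service with two ports targeting different container ports, so each pod shows up under the target port it serves. A sketch of such a Service follows; the target ports are taken from the log, while the selector, port names, and service ports are assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			// hypothetical label shared by pod1 and pod2
			Selector: map[string]string{"app": "multiport"},
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	fmt.Println(svc.Name, "exposes", len(svc.Spec.Ports), "ports")
}
```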
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:8.828 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":70,"skipped":1249,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:27:39.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 21:27:39.880: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 21:27:41.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990859, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990859, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990859, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990859, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:27:44.927: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:27:44.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5266-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:27:46.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4287" for this suite. STEP: Destroying namespace "webhook-4287-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.868 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":71,"skipped":1261,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:27:46.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 21:27:47.022: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 21:27:49.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990867, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990867, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990867, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719990867, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:27:52.063: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the 
AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:27:52.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6935" for this suite. STEP: Destroying namespace "webhook-6935-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.964 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":72,"skipped":1280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:27:52.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:27:56.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6253" for this suite. 
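What the terminated-reason test waits for is a terminal container state with a populated reason. The helper below sketches that check against the core/v1 status types; the "Error" reason is the usual value for a command that always fails, assumed here rather than read from the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hasTerminatedReason reports whether every container in the pod status has
// ended with a non-empty termination reason, which is what the test above
// waits for after running a command that always fails.
func hasTerminatedReason(status corev1.PodStatus) bool {
	if len(status.ContainerStatuses) == 0 {
		return false
	}
	for _, cs := range status.ContainerStatuses {
		term := cs.State.Terminated
		if term == nil || term.Reason == "" {
			return false
		}
	}
	return true
}

func main() {
	status := corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{{
			State: corev1.ContainerState{
				Terminated: &corev1.ContainerStateTerminated{ExitCode: 1, Reason: "Error"},
			},
		}},
	}
	fmt.Println(hasTerminatedReason(status)) // true
}
```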
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1372,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:27:56.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 21:27:56.494: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a11140c-0b82-42d4-b1cd-323beb20c299" in namespace "projected-1252" to be "success or failure" Mar 16 21:27:56.497: INFO: Pod "downwardapi-volume-4a11140c-0b82-42d4-b1cd-323beb20c299": Phase="Pending", Reason="", readiness=false. Elapsed: 3.080183ms Mar 16 21:27:58.501: INFO: Pod "downwardapi-volume-4a11140c-0b82-42d4-b1cd-323beb20c299": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007129734s Mar 16 21:28:00.505: INFO: Pod "downwardapi-volume-4a11140c-0b82-42d4-b1cd-323beb20c299": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011353624s STEP: Saw pod success Mar 16 21:28:00.505: INFO: Pod "downwardapi-volume-4a11140c-0b82-42d4-b1cd-323beb20c299" satisfied condition "success or failure" Mar 16 21:28:00.508: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4a11140c-0b82-42d4-b1cd-323beb20c299 container client-container: STEP: delete the pod Mar 16 21:28:00.539: INFO: Waiting for pod downwardapi-volume-4a11140c-0b82-42d4-b1cd-323beb20c299 to disappear Mar 16 21:28:00.568: INFO: Pod downwardapi-volume-4a11140c-0b82-42d4-b1cd-323beb20c299 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:28:00.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1252" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1392,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:28:00.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 16 21:28:05.171: INFO: Successfully updated pod "adopt-release-52gpf" STEP: Checking that the Job readopts the Pod Mar 16 21:28:05.171: INFO: Waiting up to 15m0s for pod "adopt-release-52gpf" in namespace "job-2475" to be "adopted" Mar 16 21:28:05.179: INFO: Pod "adopt-release-52gpf": Phase="Running", Reason="", readiness=true. Elapsed: 8.0455ms Mar 16 21:28:07.183: INFO: Pod "adopt-release-52gpf": Phase="Running", Reason="", readiness=true. Elapsed: 2.01179827s Mar 16 21:28:07.183: INFO: Pod "adopt-release-52gpf" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 16 21:28:07.692: INFO: Successfully updated pod "adopt-release-52gpf" STEP: Checking that the Job releases the Pod Mar 16 21:28:07.692: INFO: Waiting up to 15m0s for pod "adopt-release-52gpf" in namespace "job-2475" to be "released" Mar 16 21:28:07.708: INFO: Pod "adopt-release-52gpf": Phase="Running", Reason="", readiness=true. Elapsed: 16.695464ms Mar 16 21:28:09.717: INFO: Pod "adopt-release-52gpf": Phase="Running", Reason="", readiness=true. Elapsed: 2.025131339s Mar 16 21:28:09.717: INFO: Pod "adopt-release-52gpf" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:28:09.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2475" for this suite. 
• [SLOW TEST:9.148 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":75,"skipped":1413,"failed":0} [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:28:09.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:28:09.834: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3695 I0316 21:28:09.850409 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3695, replica count: 1 I0316 21:28:10.900801 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 21:28:11.901058 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 21:28:12.901276 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 21:28:13.901502 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 21:28:14.030: INFO: Created: latency-svc-8vglj Mar 16 21:28:14.045: INFO: Got endpoints: latency-svc-8vglj [43.367701ms] Mar 16 21:28:14.116: INFO: Created: latency-svc-cqbmx Mar 16 21:28:14.131: INFO: Got endpoints: latency-svc-cqbmx [86.292733ms] Mar 16 21:28:14.161: INFO: Created: latency-svc-qltp4 Mar 16 21:28:14.175: INFO: Got endpoints: latency-svc-qltp4 [130.049868ms] Mar 16 21:28:14.197: INFO: Created: latency-svc-s8qvb Mar 16 21:28:14.212: INFO: Got endpoints: latency-svc-s8qvb [167.503352ms] Mar 16 21:28:14.259: INFO: Created: latency-svc-m9ffp Mar 16 21:28:14.265: INFO: Got endpoints: latency-svc-m9ffp [220.015951ms] Mar 16 21:28:14.293: INFO: Created: latency-svc-7q5tt Mar 16 21:28:14.308: INFO: Got endpoints: latency-svc-7q5tt [262.489426ms] Mar 16 21:28:14.335: INFO: Created: latency-svc-s2m8x Mar 16 21:28:14.350: INFO: Got endpoints: latency-svc-s2m8x [304.802268ms] Mar 16 21:28:14.401: INFO: Created: latency-svc-7vkhk Mar 16 21:28:14.405: INFO: Got endpoints: latency-svc-7vkhk [360.006643ms] Mar 16 21:28:14.443: INFO: Created: latency-svc-wdx9x Mar 16 21:28:14.458: INFO: Got endpoints: latency-svc-wdx9x [412.971015ms] Mar 16 21:28:14.479: INFO: Created: latency-svc-s877s Mar 16 21:28:14.494: INFO: Got endpoints: latency-svc-s877s [449.205424ms] Mar 16 21:28:14.533: INFO: Created: latency-svc-kbkfj Mar 16 21:28:14.563: INFO: Got 
endpoints: latency-svc-kbkfj [518.06686ms] Mar 16 21:28:14.594: INFO: Created: latency-svc-md4xr Mar 16 21:28:14.603: INFO: Got endpoints: latency-svc-md4xr [558.094946ms] Mar 16 21:28:14.713: INFO: Created: latency-svc-6sk5k Mar 16 21:28:14.717: INFO: Got endpoints: latency-svc-6sk5k [671.459785ms] Mar 16 21:28:14.750: INFO: Created: latency-svc-cc6vx Mar 16 21:28:14.782: INFO: Got endpoints: latency-svc-cc6vx [736.541318ms] Mar 16 21:28:14.803: INFO: Created: latency-svc-k6nnx Mar 16 21:28:14.862: INFO: Got endpoints: latency-svc-k6nnx [817.068063ms] Mar 16 21:28:14.880: INFO: Created: latency-svc-6l65f Mar 16 21:28:14.898: INFO: Got endpoints: latency-svc-6l65f [853.028618ms] Mar 16 21:28:14.923: INFO: Created: latency-svc-vfxvn Mar 16 21:28:14.940: INFO: Got endpoints: latency-svc-vfxvn [808.943923ms] Mar 16 21:28:15.006: INFO: Created: latency-svc-c69vt Mar 16 21:28:15.031: INFO: Created: latency-svc-54p4b Mar 16 21:28:15.031: INFO: Got endpoints: latency-svc-c69vt [856.186998ms] Mar 16 21:28:15.049: INFO: Got endpoints: latency-svc-54p4b [836.364561ms] Mar 16 21:28:15.067: INFO: Created: latency-svc-9qv7z Mar 16 21:28:15.080: INFO: Got endpoints: latency-svc-9qv7z [814.622249ms] Mar 16 21:28:15.150: INFO: Created: latency-svc-ctht4 Mar 16 21:28:15.157: INFO: Got endpoints: latency-svc-ctht4 [849.266336ms] Mar 16 21:28:15.181: INFO: Created: latency-svc-mznst Mar 16 21:28:15.205: INFO: Got endpoints: latency-svc-mznst [854.902866ms] Mar 16 21:28:15.229: INFO: Created: latency-svc-hnrbj Mar 16 21:28:15.241: INFO: Got endpoints: latency-svc-hnrbj [836.31577ms] Mar 16 21:28:15.281: INFO: Created: latency-svc-jjdgg Mar 16 21:28:15.290: INFO: Got endpoints: latency-svc-jjdgg [831.704653ms] Mar 16 21:28:15.319: INFO: Created: latency-svc-jmc6n Mar 16 21:28:15.332: INFO: Got endpoints: latency-svc-jmc6n [837.950459ms] Mar 16 21:28:15.362: INFO: Created: latency-svc-5njdp Mar 16 21:28:15.374: INFO: Got endpoints: latency-svc-5njdp [810.911168ms] Mar 16 21:28:15.443: INFO: Created: latency-svc-2ngxj Mar 16 21:28:15.446: INFO: Got endpoints: latency-svc-2ngxj [842.790681ms] Mar 16 21:28:15.474: INFO: Created: latency-svc-kr2xh Mar 16 21:28:15.489: INFO: Got endpoints: latency-svc-kr2xh [772.39225ms] Mar 16 21:28:15.517: INFO: Created: latency-svc-fh2cm Mar 16 21:28:15.525: INFO: Got endpoints: latency-svc-fh2cm [743.622837ms] Mar 16 21:28:15.587: INFO: Created: latency-svc-t4k2b Mar 16 21:28:15.590: INFO: Got endpoints: latency-svc-t4k2b [727.658621ms] Mar 16 21:28:15.625: INFO: Created: latency-svc-2vg7x Mar 16 21:28:15.655: INFO: Got endpoints: latency-svc-2vg7x [756.911976ms] Mar 16 21:28:15.724: INFO: Created: latency-svc-w92hz Mar 16 21:28:15.731: INFO: Got endpoints: latency-svc-w92hz [790.438937ms] Mar 16 21:28:15.763: INFO: Created: latency-svc-874wc Mar 16 21:28:15.784: INFO: Got endpoints: latency-svc-874wc [752.812062ms] Mar 16 21:28:15.823: INFO: Created: latency-svc-lpnbk Mar 16 21:28:15.856: INFO: Got endpoints: latency-svc-lpnbk [807.085222ms] Mar 16 21:28:15.912: INFO: Created: latency-svc-hz5fv Mar 16 21:28:15.929: INFO: Got endpoints: latency-svc-hz5fv [849.485836ms] Mar 16 21:28:16.006: INFO: Created: latency-svc-t78mp Mar 16 21:28:16.013: INFO: Got endpoints: latency-svc-t78mp [856.457948ms] Mar 16 21:28:16.050: INFO: Created: latency-svc-q2mgq Mar 16 21:28:16.067: INFO: Got endpoints: latency-svc-q2mgq [862.072139ms] Mar 16 21:28:16.095: INFO: Created: latency-svc-xk66x Mar 16 21:28:16.138: INFO: Got endpoints: latency-svc-xk66x [896.260406ms] Mar 16 21:28:16.164: INFO: 
Created: latency-svc-fz292 Mar 16 21:28:16.195: INFO: Got endpoints: latency-svc-fz292 [904.990505ms] Mar 16 21:28:16.225: INFO: Created: latency-svc-d9lhf Mar 16 21:28:16.281: INFO: Got endpoints: latency-svc-d9lhf [948.610281ms] Mar 16 21:28:16.302: INFO: Created: latency-svc-zp6mx Mar 16 21:28:16.314: INFO: Got endpoints: latency-svc-zp6mx [940.06574ms] Mar 16 21:28:16.350: INFO: Created: latency-svc-22fw7 Mar 16 21:28:16.363: INFO: Got endpoints: latency-svc-22fw7 [916.662942ms] Mar 16 21:28:16.444: INFO: Created: latency-svc-g8lg6 Mar 16 21:28:16.453: INFO: Got endpoints: latency-svc-g8lg6 [963.624653ms] Mar 16 21:28:16.470: INFO: Created: latency-svc-vdqdh Mar 16 21:28:16.483: INFO: Got endpoints: latency-svc-vdqdh [957.562618ms] Mar 16 21:28:16.506: INFO: Created: latency-svc-qdcbd Mar 16 21:28:16.542: INFO: Got endpoints: latency-svc-qdcbd [952.440773ms] Mar 16 21:28:16.920: INFO: Created: latency-svc-mfchv Mar 16 21:28:16.933: INFO: Got endpoints: latency-svc-mfchv [1.277922296s] Mar 16 21:28:17.348: INFO: Created: latency-svc-9bgr7 Mar 16 21:28:17.354: INFO: Got endpoints: latency-svc-9bgr7 [1.622857555s] Mar 16 21:28:17.375: INFO: Created: latency-svc-kzjwt Mar 16 21:28:17.389: INFO: Got endpoints: latency-svc-kzjwt [1.604173926s] Mar 16 21:28:17.435: INFO: Created: latency-svc-q7bkl Mar 16 21:28:17.485: INFO: Got endpoints: latency-svc-q7bkl [1.62915053s] Mar 16 21:28:17.507: INFO: Created: latency-svc-mf788 Mar 16 21:28:17.521: INFO: Got endpoints: latency-svc-mf788 [1.592064475s] Mar 16 21:28:17.574: INFO: Created: latency-svc-n78l8 Mar 16 21:28:17.611: INFO: Got endpoints: latency-svc-n78l8 [1.597369166s] Mar 16 21:28:17.621: INFO: Created: latency-svc-4dshk Mar 16 21:28:17.651: INFO: Got endpoints: latency-svc-4dshk [1.583737212s] Mar 16 21:28:17.700: INFO: Created: latency-svc-2spbr Mar 16 21:28:17.742: INFO: Got endpoints: latency-svc-2spbr [1.60467996s] Mar 16 21:28:17.747: INFO: Created: latency-svc-hrflf Mar 16 21:28:17.777: INFO: Got endpoints: latency-svc-hrflf [1.58234246s] Mar 16 21:28:17.820: INFO: Created: latency-svc-sbkp8 Mar 16 21:28:17.892: INFO: Got endpoints: latency-svc-sbkp8 [1.610882593s] Mar 16 21:28:17.895: INFO: Created: latency-svc-hrlwt Mar 16 21:28:17.900: INFO: Got endpoints: latency-svc-hrlwt [1.585843374s] Mar 16 21:28:17.921: INFO: Created: latency-svc-6v6bg Mar 16 21:28:17.931: INFO: Got endpoints: latency-svc-6v6bg [1.567710267s] Mar 16 21:28:17.951: INFO: Created: latency-svc-dxw6n Mar 16 21:28:17.967: INFO: Got endpoints: latency-svc-dxw6n [1.51398987s] Mar 16 21:28:17.987: INFO: Created: latency-svc-ncllq Mar 16 21:28:18.024: INFO: Got endpoints: latency-svc-ncllq [1.540588918s] Mar 16 21:28:18.035: INFO: Created: latency-svc-hvmcj Mar 16 21:28:18.051: INFO: Got endpoints: latency-svc-hvmcj [1.509003592s] Mar 16 21:28:18.077: INFO: Created: latency-svc-bqlmt Mar 16 21:28:18.088: INFO: Got endpoints: latency-svc-bqlmt [1.154924645s] Mar 16 21:28:18.113: INFO: Created: latency-svc-xwzxd Mar 16 21:28:18.167: INFO: Got endpoints: latency-svc-xwzxd [813.669909ms] Mar 16 21:28:18.172: INFO: Created: latency-svc-pw42s Mar 16 21:28:18.178: INFO: Got endpoints: latency-svc-pw42s [789.508324ms] Mar 16 21:28:18.203: INFO: Created: latency-svc-jw7k7 Mar 16 21:28:18.221: INFO: Got endpoints: latency-svc-jw7k7 [735.996065ms] Mar 16 21:28:18.239: INFO: Created: latency-svc-tv5mg Mar 16 21:28:18.257: INFO: Got endpoints: latency-svc-tv5mg [735.323657ms] Mar 16 21:28:18.317: INFO: Created: latency-svc-hxj78 Mar 16 21:28:18.330: INFO: Got endpoints: 
latency-svc-hxj78 [719.255574ms] Mar 16 21:28:18.377: INFO: Created: latency-svc-kkqbv Mar 16 21:28:18.390: INFO: Got endpoints: latency-svc-kkqbv [739.00154ms] Mar 16 21:28:18.467: INFO: Created: latency-svc-l5jgq Mar 16 21:28:18.480: INFO: Got endpoints: latency-svc-l5jgq [737.325146ms] Mar 16 21:28:18.521: INFO: Created: latency-svc-lclbm Mar 16 21:28:18.534: INFO: Got endpoints: latency-svc-lclbm [756.60818ms] Mar 16 21:28:18.605: INFO: Created: latency-svc-txbv5 Mar 16 21:28:18.607: INFO: Got endpoints: latency-svc-txbv5 [715.561158ms] Mar 16 21:28:18.636: INFO: Created: latency-svc-x4gdf Mar 16 21:28:18.677: INFO: Got endpoints: latency-svc-x4gdf [776.688411ms] Mar 16 21:28:18.749: INFO: Created: latency-svc-68qkc Mar 16 21:28:18.752: INFO: Got endpoints: latency-svc-68qkc [821.199614ms] Mar 16 21:28:18.785: INFO: Created: latency-svc-wpr5p Mar 16 21:28:18.820: INFO: Got endpoints: latency-svc-wpr5p [853.515863ms] Mar 16 21:28:18.880: INFO: Created: latency-svc-cf2sk Mar 16 21:28:18.911: INFO: Created: latency-svc-27zrb Mar 16 21:28:18.911: INFO: Got endpoints: latency-svc-cf2sk [887.679786ms] Mar 16 21:28:18.941: INFO: Got endpoints: latency-svc-27zrb [889.354065ms] Mar 16 21:28:18.971: INFO: Created: latency-svc-bc9k8 Mar 16 21:28:18.979: INFO: Got endpoints: latency-svc-bc9k8 [891.021948ms] Mar 16 21:28:19.055: INFO: Created: latency-svc-xrz28 Mar 16 21:28:19.069: INFO: Got endpoints: latency-svc-xrz28 [902.098215ms] Mar 16 21:28:19.092: INFO: Created: latency-svc-fjhms Mar 16 21:28:19.106: INFO: Got endpoints: latency-svc-fjhms [927.440874ms] Mar 16 21:28:19.127: INFO: Created: latency-svc-7spgd Mar 16 21:28:19.191: INFO: Got endpoints: latency-svc-7spgd [969.99932ms] Mar 16 21:28:19.217: INFO: Created: latency-svc-mnpbw Mar 16 21:28:19.226: INFO: Got endpoints: latency-svc-mnpbw [969.410258ms] Mar 16 21:28:19.247: INFO: Created: latency-svc-rvsrf Mar 16 21:28:19.262: INFO: Got endpoints: latency-svc-rvsrf [932.306791ms] Mar 16 21:28:19.283: INFO: Created: latency-svc-kl69b Mar 16 21:28:19.341: INFO: Got endpoints: latency-svc-kl69b [951.241762ms] Mar 16 21:28:19.343: INFO: Created: latency-svc-pn4x6 Mar 16 21:28:19.353: INFO: Got endpoints: latency-svc-pn4x6 [873.305684ms] Mar 16 21:28:19.373: INFO: Created: latency-svc-22pbg Mar 16 21:28:19.389: INFO: Got endpoints: latency-svc-22pbg [855.327982ms] Mar 16 21:28:19.408: INFO: Created: latency-svc-fj552 Mar 16 21:28:19.427: INFO: Got endpoints: latency-svc-fj552 [819.869531ms] Mar 16 21:28:19.491: INFO: Created: latency-svc-mh5h2 Mar 16 21:28:19.494: INFO: Got endpoints: latency-svc-mh5h2 [816.863759ms] Mar 16 21:28:19.522: INFO: Created: latency-svc-clvc4 Mar 16 21:28:19.540: INFO: Got endpoints: latency-svc-clvc4 [788.239656ms] Mar 16 21:28:19.559: INFO: Created: latency-svc-sqkmm Mar 16 21:28:19.576: INFO: Got endpoints: latency-svc-sqkmm [755.843874ms] Mar 16 21:28:19.631: INFO: Created: latency-svc-zgpwj Mar 16 21:28:19.661: INFO: Got endpoints: latency-svc-zgpwj [749.734921ms] Mar 16 21:28:19.684: INFO: Created: latency-svc-ndfwn Mar 16 21:28:19.697: INFO: Got endpoints: latency-svc-ndfwn [755.929676ms] Mar 16 21:28:19.791: INFO: Created: latency-svc-n4sh6 Mar 16 21:28:19.793: INFO: Got endpoints: latency-svc-n4sh6 [814.19424ms] Mar 16 21:28:19.853: INFO: Created: latency-svc-clj5r Mar 16 21:28:19.865: INFO: Got endpoints: latency-svc-clj5r [795.240389ms] Mar 16 21:28:19.883: INFO: Created: latency-svc-qw9gw Mar 16 21:28:19.928: INFO: Got endpoints: latency-svc-qw9gw [822.170856ms] Mar 16 21:28:19.955: INFO: Created: 
latency-svc-6ngqw Mar 16 21:28:19.967: INFO: Got endpoints: latency-svc-6ngqw [776.073363ms] Mar 16 21:28:19.984: INFO: Created: latency-svc-6v78d Mar 16 21:28:20.008: INFO: Got endpoints: latency-svc-6v78d [782.129516ms] Mar 16 21:28:20.072: INFO: Created: latency-svc-w78th Mar 16 21:28:20.075: INFO: Got endpoints: latency-svc-w78th [812.71124ms] Mar 16 21:28:20.105: INFO: Created: latency-svc-5j2cm Mar 16 21:28:20.118: INFO: Got endpoints: latency-svc-5j2cm [777.342128ms] Mar 16 21:28:20.141: INFO: Created: latency-svc-jmkq2 Mar 16 21:28:20.155: INFO: Got endpoints: latency-svc-jmkq2 [801.59625ms] Mar 16 21:28:20.215: INFO: Created: latency-svc-pkpr8 Mar 16 21:28:20.219: INFO: Got endpoints: latency-svc-pkpr8 [829.557426ms] Mar 16 21:28:20.248: INFO: Created: latency-svc-z78t5 Mar 16 21:28:20.263: INFO: Got endpoints: latency-svc-z78t5 [835.494359ms] Mar 16 21:28:20.284: INFO: Created: latency-svc-d66kq Mar 16 21:28:20.299: INFO: Got endpoints: latency-svc-d66kq [805.544133ms] Mar 16 21:28:20.359: INFO: Created: latency-svc-nfx88 Mar 16 21:28:20.363: INFO: Got endpoints: latency-svc-nfx88 [822.517066ms] Mar 16 21:28:20.399: INFO: Created: latency-svc-r55vj Mar 16 21:28:20.414: INFO: Got endpoints: latency-svc-r55vj [837.723255ms] Mar 16 21:28:20.440: INFO: Created: latency-svc-tdwm6 Mar 16 21:28:20.456: INFO: Got endpoints: latency-svc-tdwm6 [794.879847ms] Mar 16 21:28:20.503: INFO: Created: latency-svc-qjqbm Mar 16 21:28:20.510: INFO: Got endpoints: latency-svc-qjqbm [813.37909ms] Mar 16 21:28:20.548: INFO: Created: latency-svc-h8s5r Mar 16 21:28:20.571: INFO: Got endpoints: latency-svc-h8s5r [777.251795ms] Mar 16 21:28:20.596: INFO: Created: latency-svc-5qbv8 Mar 16 21:28:20.664: INFO: Got endpoints: latency-svc-5qbv8 [799.237384ms] Mar 16 21:28:20.693: INFO: Created: latency-svc-kmq5b Mar 16 21:28:20.715: INFO: Got endpoints: latency-svc-kmq5b [787.220859ms] Mar 16 21:28:20.752: INFO: Created: latency-svc-sjwkh Mar 16 21:28:20.826: INFO: Got endpoints: latency-svc-sjwkh [858.464093ms] Mar 16 21:28:20.828: INFO: Created: latency-svc-zzz7s Mar 16 21:28:20.841: INFO: Got endpoints: latency-svc-zzz7s [832.960536ms] Mar 16 21:28:20.861: INFO: Created: latency-svc-2fsb7 Mar 16 21:28:20.884: INFO: Got endpoints: latency-svc-2fsb7 [808.580527ms] Mar 16 21:28:20.902: INFO: Created: latency-svc-fgh2f Mar 16 21:28:20.914: INFO: Got endpoints: latency-svc-fgh2f [795.442971ms] Mar 16 21:28:20.971: INFO: Created: latency-svc-jp465 Mar 16 21:28:20.974: INFO: Got endpoints: latency-svc-jp465 [818.909501ms] Mar 16 21:28:20.998: INFO: Created: latency-svc-89pvt Mar 16 21:28:21.010: INFO: Got endpoints: latency-svc-89pvt [791.617721ms] Mar 16 21:28:21.028: INFO: Created: latency-svc-44pvs Mar 16 21:28:21.041: INFO: Got endpoints: latency-svc-44pvs [777.778518ms] Mar 16 21:28:21.058: INFO: Created: latency-svc-k7mwx Mar 16 21:28:21.114: INFO: Got endpoints: latency-svc-k7mwx [814.193638ms] Mar 16 21:28:21.116: INFO: Created: latency-svc-q9r76 Mar 16 21:28:21.125: INFO: Got endpoints: latency-svc-q9r76 [762.755593ms] Mar 16 21:28:21.155: INFO: Created: latency-svc-h26jk Mar 16 21:28:21.184: INFO: Got endpoints: latency-svc-h26jk [769.685979ms] Mar 16 21:28:21.264: INFO: Created: latency-svc-svsmx Mar 16 21:28:21.266: INFO: Got endpoints: latency-svc-svsmx [810.082804ms] Mar 16 21:28:21.292: INFO: Created: latency-svc-nzg4x Mar 16 21:28:21.322: INFO: Got endpoints: latency-svc-nzg4x [812.147033ms] Mar 16 21:28:21.346: INFO: Created: latency-svc-2j46n Mar 16 21:28:21.360: INFO: Got endpoints: 
latency-svc-2j46n [789.797755ms] Mar 16 21:28:21.407: INFO: Created: latency-svc-fht22 Mar 16 21:28:21.411: INFO: Got endpoints: latency-svc-fht22 [746.468895ms] Mar 16 21:28:21.448: INFO: Created: latency-svc-nwvtb Mar 16 21:28:21.457: INFO: Got endpoints: latency-svc-nwvtb [741.662988ms] Mar 16 21:28:21.484: INFO: Created: latency-svc-d9vpj Mar 16 21:28:21.494: INFO: Got endpoints: latency-svc-d9vpj [667.766102ms] Mar 16 21:28:21.539: INFO: Created: latency-svc-d5c7x Mar 16 21:28:21.542: INFO: Got endpoints: latency-svc-d5c7x [700.481257ms] Mar 16 21:28:21.568: INFO: Created: latency-svc-dnwvg Mar 16 21:28:21.584: INFO: Got endpoints: latency-svc-dnwvg [699.869654ms] Mar 16 21:28:21.604: INFO: Created: latency-svc-6bq2f Mar 16 21:28:21.620: INFO: Got endpoints: latency-svc-6bq2f [706.339746ms] Mar 16 21:28:21.671: INFO: Created: latency-svc-9bkxf Mar 16 21:28:21.706: INFO: Got endpoints: latency-svc-9bkxf [732.621591ms] Mar 16 21:28:21.706: INFO: Created: latency-svc-lw7m9 Mar 16 21:28:21.717: INFO: Got endpoints: latency-svc-lw7m9 [706.040751ms] Mar 16 21:28:21.736: INFO: Created: latency-svc-wj9h4 Mar 16 21:28:21.753: INFO: Got endpoints: latency-svc-wj9h4 [712.079947ms] Mar 16 21:28:21.802: INFO: Created: latency-svc-wfwgw Mar 16 21:28:21.826: INFO: Created: latency-svc-v9xhs Mar 16 21:28:21.826: INFO: Got endpoints: latency-svc-wfwgw [712.714972ms] Mar 16 21:28:21.843: INFO: Got endpoints: latency-svc-v9xhs [717.895356ms] Mar 16 21:28:21.886: INFO: Created: latency-svc-dwqn7 Mar 16 21:28:21.898: INFO: Got endpoints: latency-svc-dwqn7 [714.038173ms] Mar 16 21:28:21.952: INFO: Created: latency-svc-bgpwq Mar 16 21:28:21.982: INFO: Created: latency-svc-w76cg Mar 16 21:28:21.982: INFO: Got endpoints: latency-svc-bgpwq [715.904071ms] Mar 16 21:28:21.994: INFO: Got endpoints: latency-svc-w76cg [671.486658ms] Mar 16 21:28:22.012: INFO: Created: latency-svc-whbv7 Mar 16 21:28:22.025: INFO: Got endpoints: latency-svc-whbv7 [664.066017ms] Mar 16 21:28:22.042: INFO: Created: latency-svc-258mf Mar 16 21:28:22.090: INFO: Got endpoints: latency-svc-258mf [678.955164ms] Mar 16 21:28:22.126: INFO: Created: latency-svc-7phbd Mar 16 21:28:22.139: INFO: Got endpoints: latency-svc-7phbd [681.55613ms] Mar 16 21:28:22.174: INFO: Created: latency-svc-kgm22 Mar 16 21:28:22.227: INFO: Got endpoints: latency-svc-kgm22 [733.151563ms] Mar 16 21:28:22.247: INFO: Created: latency-svc-sznp4 Mar 16 21:28:22.259: INFO: Got endpoints: latency-svc-sznp4 [717.328973ms] Mar 16 21:28:22.282: INFO: Created: latency-svc-fcv7v Mar 16 21:28:22.296: INFO: Got endpoints: latency-svc-fcv7v [712.155895ms] Mar 16 21:28:22.317: INFO: Created: latency-svc-lxxmv Mar 16 21:28:22.377: INFO: Got endpoints: latency-svc-lxxmv [756.786168ms] Mar 16 21:28:22.379: INFO: Created: latency-svc-z8xcp Mar 16 21:28:22.381: INFO: Got endpoints: latency-svc-z8xcp [674.21039ms] Mar 16 21:28:22.408: INFO: Created: latency-svc-fddbv Mar 16 21:28:22.422: INFO: Got endpoints: latency-svc-fddbv [705.673472ms] Mar 16 21:28:22.450: INFO: Created: latency-svc-sz4b7 Mar 16 21:28:22.465: INFO: Got endpoints: latency-svc-sz4b7 [711.825759ms] Mar 16 21:28:22.521: INFO: Created: latency-svc-pdfgm Mar 16 21:28:22.523: INFO: Got endpoints: latency-svc-pdfgm [696.876914ms] Mar 16 21:28:22.545: INFO: Created: latency-svc-2hwrj Mar 16 21:28:22.561: INFO: Got endpoints: latency-svc-2hwrj [718.058782ms] Mar 16 21:28:22.582: INFO: Created: latency-svc-6dv8j Mar 16 21:28:22.591: INFO: Got endpoints: latency-svc-6dv8j [693.28637ms] Mar 16 21:28:22.612: INFO: Created: 
latency-svc-jk8mq Mar 16 21:28:22.652: INFO: Got endpoints: latency-svc-jk8mq [670.057866ms] Mar 16 21:28:22.666: INFO: Created: latency-svc-mppbc Mar 16 21:28:22.696: INFO: Got endpoints: latency-svc-mppbc [701.899694ms] Mar 16 21:28:22.744: INFO: Created: latency-svc-zcsbf Mar 16 21:28:22.802: INFO: Got endpoints: latency-svc-zcsbf [777.72703ms] Mar 16 21:28:22.805: INFO: Created: latency-svc-9lhgs Mar 16 21:28:22.808: INFO: Got endpoints: latency-svc-9lhgs [718.904885ms] Mar 16 21:28:22.827: INFO: Created: latency-svc-5pw5g Mar 16 21:28:22.845: INFO: Got endpoints: latency-svc-5pw5g [706.810079ms] Mar 16 21:28:22.864: INFO: Created: latency-svc-s7z5h Mar 16 21:28:22.881: INFO: Got endpoints: latency-svc-s7z5h [654.223262ms] Mar 16 21:28:22.900: INFO: Created: latency-svc-n557s Mar 16 21:28:22.952: INFO: Got endpoints: latency-svc-n557s [692.67975ms] Mar 16 21:28:22.971: INFO: Created: latency-svc-kq6dz Mar 16 21:28:22.984: INFO: Got endpoints: latency-svc-kq6dz [687.806008ms] Mar 16 21:28:23.001: INFO: Created: latency-svc-9dkmf Mar 16 21:28:23.025: INFO: Got endpoints: latency-svc-9dkmf [648.26479ms] Mar 16 21:28:23.049: INFO: Created: latency-svc-mqlcm Mar 16 21:28:23.114: INFO: Got endpoints: latency-svc-mqlcm [733.110734ms] Mar 16 21:28:23.129: INFO: Created: latency-svc-gkwfb Mar 16 21:28:23.146: INFO: Got endpoints: latency-svc-gkwfb [723.383919ms] Mar 16 21:28:23.146: INFO: Created: latency-svc-tldnz Mar 16 21:28:23.159: INFO: Got endpoints: latency-svc-tldnz [694.015163ms] Mar 16 21:28:23.175: INFO: Created: latency-svc-fsk8t Mar 16 21:28:23.189: INFO: Got endpoints: latency-svc-fsk8t [666.065012ms] Mar 16 21:28:23.211: INFO: Created: latency-svc-t5qwp Mar 16 21:28:23.276: INFO: Got endpoints: latency-svc-t5qwp [714.105817ms] Mar 16 21:28:23.278: INFO: Created: latency-svc-cdfs2 Mar 16 21:28:23.286: INFO: Got endpoints: latency-svc-cdfs2 [694.315702ms] Mar 16 21:28:23.308: INFO: Created: latency-svc-jkk5x Mar 16 21:28:23.322: INFO: Got endpoints: latency-svc-jkk5x [669.519156ms] Mar 16 21:28:23.343: INFO: Created: latency-svc-rsmbp Mar 16 21:28:23.352: INFO: Got endpoints: latency-svc-rsmbp [655.888268ms] Mar 16 21:28:23.373: INFO: Created: latency-svc-sslkp Mar 16 21:28:23.425: INFO: Got endpoints: latency-svc-sslkp [622.428683ms] Mar 16 21:28:23.439: INFO: Created: latency-svc-wbm8g Mar 16 21:28:23.470: INFO: Got endpoints: latency-svc-wbm8g [661.120284ms] Mar 16 21:28:23.506: INFO: Created: latency-svc-5sgvd Mar 16 21:28:23.521: INFO: Got endpoints: latency-svc-5sgvd [675.705704ms] Mar 16 21:28:23.569: INFO: Created: latency-svc-248tb Mar 16 21:28:23.575: INFO: Got endpoints: latency-svc-248tb [693.83186ms] Mar 16 21:28:23.595: INFO: Created: latency-svc-nb66r Mar 16 21:28:23.606: INFO: Got endpoints: latency-svc-nb66r [653.492832ms] Mar 16 21:28:23.625: INFO: Created: latency-svc-s4cs8 Mar 16 21:28:23.642: INFO: Got endpoints: latency-svc-s4cs8 [658.187711ms] Mar 16 21:28:23.712: INFO: Created: latency-svc-xlf4w Mar 16 21:28:23.715: INFO: Got endpoints: latency-svc-xlf4w [689.309329ms] Mar 16 21:28:23.770: INFO: Created: latency-svc-7q4dx Mar 16 21:28:23.794: INFO: Got endpoints: latency-svc-7q4dx [679.703754ms] Mar 16 21:28:23.850: INFO: Created: latency-svc-gxrr8 Mar 16 21:28:23.853: INFO: Got endpoints: latency-svc-gxrr8 [706.683408ms] Mar 16 21:28:24.115: INFO: Created: latency-svc-lfxm2 Mar 16 21:28:24.381: INFO: Got endpoints: latency-svc-lfxm2 [1.222379471s] Mar 16 21:28:24.382: INFO: Created: latency-svc-rzv9w Mar 16 21:28:24.390: INFO: Got endpoints: 
latency-svc-rzv9w [1.200984501s] Mar 16 21:28:24.441: INFO: Created: latency-svc-f8dkz Mar 16 21:28:24.448: INFO: Got endpoints: latency-svc-f8dkz [1.172087064s] Mar 16 21:28:24.465: INFO: Created: latency-svc-xgdn8 Mar 16 21:28:24.510: INFO: Created: latency-svc-ptwqc Mar 16 21:28:24.519: INFO: Got endpoints: latency-svc-xgdn8 [1.233191463s] Mar 16 21:28:24.520: INFO: Got endpoints: latency-svc-ptwqc [1.197619996s] Mar 16 21:28:24.585: INFO: Created: latency-svc-p9c47 Mar 16 21:28:24.597: INFO: Got endpoints: latency-svc-p9c47 [1.245550967s] Mar 16 21:28:24.640: INFO: Created: latency-svc-6tgsd Mar 16 21:28:24.652: INFO: Got endpoints: latency-svc-6tgsd [1.226819251s] Mar 16 21:28:24.669: INFO: Created: latency-svc-22svk Mar 16 21:28:24.688: INFO: Got endpoints: latency-svc-22svk [1.217908559s] Mar 16 21:28:24.723: INFO: Created: latency-svc-mclf7 Mar 16 21:28:24.808: INFO: Got endpoints: latency-svc-mclf7 [1.28662507s] Mar 16 21:28:24.813: INFO: Created: latency-svc-wvhk7 Mar 16 21:28:24.843: INFO: Got endpoints: latency-svc-wvhk7 [1.267943238s] Mar 16 21:28:24.885: INFO: Created: latency-svc-2n72g Mar 16 21:28:24.899: INFO: Got endpoints: latency-svc-2n72g [1.293144405s] Mar 16 21:28:24.940: INFO: Created: latency-svc-46wp2 Mar 16 21:28:24.953: INFO: Got endpoints: latency-svc-46wp2 [1.310747395s] Mar 16 21:28:24.975: INFO: Created: latency-svc-ldjdd Mar 16 21:28:25.004: INFO: Got endpoints: latency-svc-ldjdd [1.289438368s] Mar 16 21:28:25.035: INFO: Created: latency-svc-dc9wm Mar 16 21:28:25.096: INFO: Got endpoints: latency-svc-dc9wm [1.301820587s] Mar 16 21:28:25.098: INFO: Created: latency-svc-n5nqf Mar 16 21:28:25.103: INFO: Got endpoints: latency-svc-n5nqf [1.250600136s] Mar 16 21:28:25.124: INFO: Created: latency-svc-f99b9 Mar 16 21:28:25.134: INFO: Got endpoints: latency-svc-f99b9 [752.616157ms] Mar 16 21:28:25.155: INFO: Created: latency-svc-qhtnc Mar 16 21:28:25.164: INFO: Got endpoints: latency-svc-qhtnc [773.900334ms] Mar 16 21:28:25.190: INFO: Created: latency-svc-rrnz6 Mar 16 21:28:25.234: INFO: Got endpoints: latency-svc-rrnz6 [785.814713ms] Mar 16 21:28:25.244: INFO: Created: latency-svc-mf82s Mar 16 21:28:25.261: INFO: Got endpoints: latency-svc-mf82s [742.049752ms] Mar 16 21:28:25.281: INFO: Created: latency-svc-bnr2d Mar 16 21:28:25.298: INFO: Got endpoints: latency-svc-bnr2d [778.140516ms] Mar 16 21:28:25.316: INFO: Created: latency-svc-vxwz5 Mar 16 21:28:25.377: INFO: Got endpoints: latency-svc-vxwz5 [779.87964ms] Mar 16 21:28:25.379: INFO: Created: latency-svc-wg4gr Mar 16 21:28:25.393: INFO: Got endpoints: latency-svc-wg4gr [741.512383ms] Mar 16 21:28:25.412: INFO: Created: latency-svc-g5xk4 Mar 16 21:28:25.436: INFO: Got endpoints: latency-svc-g5xk4 [748.783347ms] Mar 16 21:28:25.466: INFO: Created: latency-svc-cdgjj Mar 16 21:28:25.527: INFO: Got endpoints: latency-svc-cdgjj [718.646213ms] Mar 16 21:28:25.529: INFO: Created: latency-svc-hpzmt Mar 16 21:28:25.539: INFO: Got endpoints: latency-svc-hpzmt [695.324403ms] Mar 16 21:28:25.564: INFO: Created: latency-svc-9x945 Mar 16 21:28:25.574: INFO: Got endpoints: latency-svc-9x945 [675.489573ms] Mar 16 21:28:25.592: INFO: Created: latency-svc-j6glv Mar 16 21:28:25.605: INFO: Got endpoints: latency-svc-j6glv [652.153055ms] Mar 16 21:28:25.605: INFO: Latencies: [86.292733ms 130.049868ms 167.503352ms 220.015951ms 262.489426ms 304.802268ms 360.006643ms 412.971015ms 449.205424ms 518.06686ms 558.094946ms 622.428683ms 648.26479ms 652.153055ms 653.492832ms 654.223262ms 655.888268ms 658.187711ms 661.120284ms 664.066017ms 
666.065012ms 667.766102ms 669.519156ms 670.057866ms 671.459785ms 671.486658ms 674.21039ms 675.489573ms 675.705704ms 678.955164ms 679.703754ms 681.55613ms 687.806008ms 689.309329ms 692.67975ms 693.28637ms 693.83186ms 694.015163ms 694.315702ms 695.324403ms 696.876914ms 699.869654ms 700.481257ms 701.899694ms 705.673472ms 706.040751ms 706.339746ms 706.683408ms 706.810079ms 711.825759ms 712.079947ms 712.155895ms 712.714972ms 714.038173ms 714.105817ms 715.561158ms 715.904071ms 717.328973ms 717.895356ms 718.058782ms 718.646213ms 718.904885ms 719.255574ms 723.383919ms 727.658621ms 732.621591ms 733.110734ms 733.151563ms 735.323657ms 735.996065ms 736.541318ms 737.325146ms 739.00154ms 741.512383ms 741.662988ms 742.049752ms 743.622837ms 746.468895ms 748.783347ms 749.734921ms 752.616157ms 752.812062ms 755.843874ms 755.929676ms 756.60818ms 756.786168ms 756.911976ms 762.755593ms 769.685979ms 772.39225ms 773.900334ms 776.073363ms 776.688411ms 777.251795ms 777.342128ms 777.72703ms 777.778518ms 778.140516ms 779.87964ms 782.129516ms 785.814713ms 787.220859ms 788.239656ms 789.508324ms 789.797755ms 790.438937ms 791.617721ms 794.879847ms 795.240389ms 795.442971ms 799.237384ms 801.59625ms 805.544133ms 807.085222ms 808.580527ms 808.943923ms 810.082804ms 810.911168ms 812.147033ms 812.71124ms 813.37909ms 813.669909ms 814.193638ms 814.19424ms 814.622249ms 816.863759ms 817.068063ms 818.909501ms 819.869531ms 821.199614ms 822.170856ms 822.517066ms 829.557426ms 831.704653ms 832.960536ms 835.494359ms 836.31577ms 836.364561ms 837.723255ms 837.950459ms 842.790681ms 849.266336ms 849.485836ms 853.028618ms 853.515863ms 854.902866ms 855.327982ms 856.186998ms 856.457948ms 858.464093ms 862.072139ms 873.305684ms 887.679786ms 889.354065ms 891.021948ms 896.260406ms 902.098215ms 904.990505ms 916.662942ms 927.440874ms 932.306791ms 940.06574ms 948.610281ms 951.241762ms 952.440773ms 957.562618ms 963.624653ms 969.410258ms 969.99932ms 1.154924645s 1.172087064s 1.197619996s 1.200984501s 1.217908559s 1.222379471s 1.226819251s 1.233191463s 1.245550967s 1.250600136s 1.267943238s 1.277922296s 1.28662507s 1.289438368s 1.293144405s 1.301820587s 1.310747395s 1.509003592s 1.51398987s 1.540588918s 1.567710267s 1.58234246s 1.583737212s 1.585843374s 1.592064475s 1.597369166s 1.604173926s 1.60467996s 1.610882593s 1.622857555s 1.62915053s] Mar 16 21:28:25.605: INFO: 50 %ile: 785.814713ms Mar 16 21:28:25.605: INFO: 90 %ile: 1.277922296s Mar 16 21:28:25.605: INFO: 99 %ile: 1.622857555s Mar 16 21:28:25.605: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:28:25.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3695" for this suite. 
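Context for the run above: the endpoints-latency test pre-creates a set of backing pods, then creates roughly 200 short-lived Services against them and, for each, measures the interval between the Create call and the moment the Service's endpoints become observable (the bracketed duration on each "Got endpoints" line). A minimal sketch of one such Service follows; the name and selector are illustrative assumptions, since the real test randomizes names and the log does not show the pod template:

```yaml
# Sketch only: approximates one of the ~200 throwaway Services created above.
# The selector label is an assumption, not a value from the log.
apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example
  namespace: svc-latency-3695
spec:
  selector:
    name: svc-latency-pod   # assumed label on the pre-created backing pods
  ports:
  - port: 80
    protocol: TCP
```

The percentile summary above (50 %ile ≈ 786 ms, 90 %ile ≈ 1.28 s, 99 %ile ≈ 1.62 s over 200 samples) is the data the "should not be very high" assertion is evaluated on.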
• [SLOW TEST:15.891 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":76,"skipped":1413,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:28:25.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 16 21:28:25.695: INFO: Waiting up to 5m0s for pod "pod-60294a70-ec86-45d8-b558-8fbfcf4454c3" in namespace "emptydir-8952" to be "success or failure" Mar 16 21:28:25.742: INFO: Pod "pod-60294a70-ec86-45d8-b558-8fbfcf4454c3": Phase="Pending", Reason="", readiness=false. Elapsed: 46.440785ms Mar 16 21:28:27.745: INFO: Pod "pod-60294a70-ec86-45d8-b558-8fbfcf4454c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05000942s Mar 16 21:28:29.750: INFO: Pod "pod-60294a70-ec86-45d8-b558-8fbfcf4454c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054505039s STEP: Saw pod success Mar 16 21:28:29.750: INFO: Pod "pod-60294a70-ec86-45d8-b558-8fbfcf4454c3" satisfied condition "success or failure" Mar 16 21:28:29.753: INFO: Trying to get logs from node jerma-worker pod pod-60294a70-ec86-45d8-b558-8fbfcf4454c3 container test-container: STEP: delete the pod Mar 16 21:28:29.786: INFO: Waiting for pod pod-60294a70-ec86-45d8-b558-8fbfcf4454c3 to disappear Mar 16 21:28:29.796: INFO: Pod pod-60294a70-ec86-45d8-b558-8fbfcf4454c3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:28:29.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8952" for this suite. 
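What the (non-root,0666,tmpfs) variant above exercises, in pod form: a memory-backed emptyDir mounted into a container that runs as a non-root user and creates a file with 0666 permissions. The sketch below uses busybox and an assumed UID for illustration; the real suite uses a dedicated mount-test image with equivalent behavior:

```yaml
# Illustrative pod, assuming a busybox-style check; not the suite's exact spec.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/vol/f && chmod 0666 /mnt/vol/f && ls -l /mnt/vol/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/vol
  volumes:
  - name: vol
    emptyDir:
      medium: Memory                # the "tmpfs" part: RAM-backed emptyDir
```

The Pending → Succeeded progression and the "success or failure" condition in the log correspond to this pod running to completion, after which the framework collects its container log and deletes it.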
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1415,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:28:29.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 16 21:28:34.620: INFO: Successfully updated pod "pod-update-155d34eb-4ec4-435c-b56e-ca036481ab03" STEP: verifying the updated pod is in kubernetes Mar 16 21:28:34.692: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:28:34.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3158" for this suite. • [SLOW TEST:5.008 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1423,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:28:34.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:28:34.962: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:28:39.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7484" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1507,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:28:39.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 16 21:28:39.376: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-a 3243f418-2974-4488-8ffb-66057f45224d 319851 0 2020-03-16 21:28:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 21:28:39.376: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-a 3243f418-2974-4488-8ffb-66057f45224d 319851 0 2020-03-16 21:28:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 16 21:28:49.429: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-a 3243f418-2974-4488-8ffb-66057f45224d 320262 0 2020-03-16 21:28:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 16 21:28:49.429: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-a 3243f418-2974-4488-8ffb-66057f45224d 320262 0 2020-03-16 21:28:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 16 21:28:59.435: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-a 3243f418-2974-4488-8ffb-66057f45224d 320350 0 2020-03-16 21:28:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 21:28:59.436: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-a 3243f418-2974-4488-8ffb-66057f45224d 320350 0 2020-03-16 21:28:39 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 16 21:29:09.442: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-a 3243f418-2974-4488-8ffb-66057f45224d 320386 0 2020-03-16 21:28:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 21:29:09.442: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-a 3243f418-2974-4488-8ffb-66057f45224d 320386 0 2020-03-16 21:28:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 16 21:29:19.449: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-b ed078b2f-bd0b-4024-9fe1-113739a5bdf4 320419 0 2020-03-16 21:29:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 21:29:19.450: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-b ed078b2f-bd0b-4024-9fe1-113739a5bdf4 320419 0 2020-03-16 21:29:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 16 21:29:29.457: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-b ed078b2f-bd0b-4024-9fe1-113739a5bdf4 320451 0 2020-03-16 21:29:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 21:29:29.457: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8404 /api/v1/namespaces/watch-8404/configmaps/e2e-watch-test-configmap-b ed078b2f-bd0b-4024-9fe1-113739a5bdf4 320451 0 2020-03-16 21:29:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:29:39.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8404" for this suite. 
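All six notifications above are delivered to label-selected watches (label A, label B, and A-or-B). The object being mutated is fully visible in the event dumps; reconstructed as a manifest it is simply:

```yaml
# Reconstructed from the watch events above: the test bumps the "mutation"
# value to 1 and then 2 to produce the two MODIFIED notifications.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-8404
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "2"
```

An equivalent ad-hoc watch from the command line would be `kubectl get configmaps -n watch-8404 -l watch-this-configmap=multiple-watchers-A --watch`; the B-labeled watch and the A-or-B watch work the same way with different selectors.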
• [SLOW TEST:60.289 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":80,"skipped":1525,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:29:39.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-0f138d4c-53d9-4133-abd2-1864e6e86b59 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0f138d4c-53d9-4133-abd2-1864e6e86b59 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:31:07.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6431" for this suite. 
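This test mounts a ConfigMap as a volume, updates the ConfigMap through the API, and waits for the kubelet to project the new contents into the running container, which is why it runs long: propagation is eventually consistent, bounded by the kubelet sync period and its cache TTL. A sketch of the consuming pod; the key name, image, and mount path are assumptions, while the ConfigMap name is taken from the log:

```yaml
# Sketch of a pod that would observe the ConfigMap update in its volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-upd-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cfg/data-1 2>/dev/null; sleep 2; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: configmap-test-upd-0f138d4c-53d9-4133-abd2-1864e6e86b59   # from the log
```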
• [SLOW TEST:88.524 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1529,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:31:07.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-337e10e8-e0fe-44ef-9b7b-141d2018351b in namespace container-probe-6079 Mar 16 21:31:12.140: INFO: Started pod busybox-337e10e8-e0fe-44ef-9b7b-141d2018351b in namespace container-probe-6079 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 21:31:12.142: INFO: Initial restart count of pod busybox-337e10e8-e0fe-44ef-9b7b-141d2018351b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:35:12.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6079" for this suite. 
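This is the negative liveness case: the file the exec probe reads always exists, so the probe keeps succeeding and restartCount must remain 0 across the four-minute observation window (21:31:12 to 21:35:12 in the log). A reconstruction of the pod under test; the probe command comes from the test name, while the image, sleep duration, and probe timings are conventional assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # always succeeds, so no restarts
      initialDelaySeconds: 15
      periodSeconds: 5
```

Had the container deleted /tmp/health, the kubelet would have killed and restarted it, which is what the companion positive-restart probe tests check.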
• [SLOW TEST:244.827 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1538,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:35:12.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 16 21:35:17.436: INFO: Successfully updated pod "annotationupdatedb10124a-e32b-4b9f-8aff-8fec678b38ab" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:35:19.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-713" for this suite. 
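The projected downwardAPI test is the same update-propagation pattern as the ConfigMap volume test, but the volume source is pod metadata: the suite patches the pod's annotations (the "Successfully updated pod" line above) and polls the projected file until the new value shows up. A sketch, with the annotation key, image, and paths as illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alpha              # the test later patches this value
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 2; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```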
• [SLOW TEST:6.639 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1570,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:35:19.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 16 21:35:20.049: INFO: Pod name wrapped-volume-race-f424314f-03eb-4877-b815-f999b65c3876: Found 0 pods out of 5 Mar 16 21:35:25.248: INFO: Pod name wrapped-volume-race-f424314f-03eb-4877-b815-f999b65c3876: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f424314f-03eb-4877-b815-f999b65c3876 in namespace emptydir-wrapper-5003, will wait for the garbage collector to delete the pods Mar 16 21:35:37.359: INFO: Deleting ReplicationController wrapped-volume-race-f424314f-03eb-4877-b815-f999b65c3876 took: 6.637898ms Mar 16 21:35:37.660: INFO: Terminating ReplicationController wrapped-volume-race-f424314f-03eb-4877-b815-f999b65c3876 pods took: 300.251056ms STEP: Creating RC which spawns configmap-volume pods Mar 16 21:35:50.620: INFO: Pod name wrapped-volume-race-2d5107d4-2594-4075-8eb7-97529c034c06: Found 0 pods out of 5 Mar 16 21:35:55.628: INFO: Pod name wrapped-volume-race-2d5107d4-2594-4075-8eb7-97529c034c06: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2d5107d4-2594-4075-8eb7-97529c034c06 in namespace emptydir-wrapper-5003, will wait for the garbage collector to delete the pods Mar 16 21:36:09.713: INFO: Deleting ReplicationController wrapped-volume-race-2d5107d4-2594-4075-8eb7-97529c034c06 took: 7.037314ms Mar 16 21:36:10.013: INFO: Terminating ReplicationController wrapped-volume-race-2d5107d4-2594-4075-8eb7-97529c034c06 pods took: 300.462473ms STEP: Creating RC which spawns configmap-volume pods Mar 16 21:36:16.648: INFO: Pod name wrapped-volume-race-fd31e2cf-8df5-4477-8b72-1d985bd6ae62: Found 0 pods out of 5 Mar 16 21:36:21.658: INFO: Pod name wrapped-volume-race-fd31e2cf-8df5-4477-8b72-1d985bd6ae62: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fd31e2cf-8df5-4477-8b72-1d985bd6ae62 in namespace emptydir-wrapper-5003, will wait for the garbage collector to delete the pods Mar 16 21:36:34.049: 
INFO: Deleting ReplicationController wrapped-volume-race-fd31e2cf-8df5-4477-8b72-1d985bd6ae62 took: 312.047551ms Mar 16 21:36:34.849: INFO: Terminating ReplicationController wrapped-volume-race-fd31e2cf-8df5-4477-8b72-1d985bd6ae62 pods took: 800.312222ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:36:50.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5003" for this suite. • [SLOW TEST:91.327 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":84,"skipped":1639,"failed":0} S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:36:50.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:36:50.833: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 16 21:36:50.853: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 16 21:36:55.856: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 16 21:36:55.856: INFO: Creating deployment "test-rolling-update-deployment" Mar 16 21:36:55.859: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 16 21:36:55.865: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 16 21:36:57.908: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 16 21:36:57.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991415, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991415, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991415, loc:(*time.Location)(0x7d83a80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991415, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 21:37:00.115: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 16 21:37:00.135: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6564 /apis/apps/v1/namespaces/deployment-6564/deployments/test-rolling-update-deployment 95d21405-40ef-4f14-890e-044fe22b85f5 322677 1 2020-03-16 21:36:55 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c12628 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-16 21:36:55 +0000 UTC,LastTransitionTime:2020-03-16 21:36:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-16 21:36:59 +0000 UTC,LastTransitionTime:2020-03-16 21:36:55 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 16 21:37:00.517: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-6564 /apis/apps/v1/namespaces/deployment-6564/replicasets/test-rolling-update-deployment-67cf4f6444 3c27e82a-1627-4ac3-b83d-fa7c6646052f 322663 1 2020-03-16 21:36:55 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 95d21405-40ef-4f14-890e-044fe22b85f5 0xc002c12d77 0xc002c12d78}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash:
67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c12e58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 16 21:37:00.517: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 16 21:37:00.517: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6564 /apis/apps/v1/namespaces/deployment-6564/replicasets/test-rolling-update-controller 324b0690-9bd2-4f40-83bd-05886053cba0 322675 2 2020-03-16 21:36:50 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 95d21405-40ef-4f14-890e-044fe22b85f5 0xc002c12c07 0xc002c12c08}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c12cc8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 21:37:00.530: INFO: Pod "test-rolling-update-deployment-67cf4f6444-2cfnq" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-2cfnq test-rolling-update-deployment-67cf4f6444- deployment-6564 /api/v1/namespaces/deployment-6564/pods/test-rolling-update-deployment-67cf4f6444-2cfnq c4a01e63-2975-4954-8b84-12c7252a40e6 322662 0 2020-03-16 21:36:55 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 3c27e82a-1627-4ac3-b83d-fa7c6646052f 0xc002c13517 0xc002c13518}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9hqwq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9hqwq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9hqwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:36:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:36:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:36:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:36:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.68,StartTime:2020-03-16 21:36:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 21:36:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://b71aee4a2254fca0c3c53356d53e7896ec596d9a4163b71ac9894fc551b3e61a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:37:00.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6564" for this suite. • [SLOW TEST:9.764 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":85,"skipped":1640,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:37:00.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-9de84b0e-cf63-46d5-ba85-c49ac98b86fb STEP: Creating a pod to test consume configMaps Mar 16 21:37:00.714: INFO: Waiting up to 5m0s for pod "pod-configmaps-b37cb136-1147-4755-8193-f5f8a46c4f4a" in namespace "configmap-1968" to be "success or failure" Mar 16 21:37:00.727: INFO: Pod "pod-configmaps-b37cb136-1147-4755-8193-f5f8a46c4f4a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.822331ms Mar 16 21:37:02.737: INFO: Pod "pod-configmaps-b37cb136-1147-4755-8193-f5f8a46c4f4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023642044s Mar 16 21:37:04.741: INFO: Pod "pod-configmaps-b37cb136-1147-4755-8193-f5f8a46c4f4a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027016433s STEP: Saw pod success Mar 16 21:37:04.741: INFO: Pod "pod-configmaps-b37cb136-1147-4755-8193-f5f8a46c4f4a" satisfied condition "success or failure" Mar 16 21:37:04.744: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b37cb136-1147-4755-8193-f5f8a46c4f4a container configmap-volume-test: STEP: delete the pod Mar 16 21:37:04.776: INFO: Waiting for pod pod-configmaps-b37cb136-1147-4755-8193-f5f8a46c4f4a to disappear Mar 16 21:37:04.827: INFO: Pod pod-configmaps-b37cb136-1147-4755-8193-f5f8a46c4f4a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:37:04.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1968" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1659,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:37:04.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 16 21:37:04.860: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 16 21:37:04.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2792' Mar 16 21:37:08.059: INFO: stderr: "" Mar 16 21:37:08.059: INFO: stdout: "service/agnhost-slave created\n" Mar 16 21:37:08.060: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 16 21:37:08.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2792' Mar 16 21:37:08.308: INFO: stderr: "" Mar 16 21:37:08.308: INFO: stdout: "service/agnhost-master created\n" Mar 16 21:37:08.308: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 16 21:37:08.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2792' Mar 16 21:37:08.580: INFO: stderr: "" Mar 16 21:37:08.580: INFO: stdout: "service/frontend created\n" Mar 16 21:37:08.581: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 16 21:37:08.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2792' Mar 16 21:37:08.818: INFO: stderr: "" Mar 16 21:37:08.818: INFO: stdout: "deployment.apps/frontend created\n" Mar 16 21:37:08.818: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 16 21:37:08.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2792' Mar 16 21:37:09.079: INFO: stderr: "" Mar 16 21:37:09.079: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 16 21:37:09.079: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 16 21:37:09.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2792' Mar 16 21:37:09.312: INFO: stderr: "" Mar 16 21:37:09.312: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 16 21:37:09.312: INFO: Waiting for all frontend pods to be Running. Mar 16 21:37:19.363: INFO: Waiting for frontend to serve content. Mar 16 21:37:19.375: INFO: Trying to add a new entry to the guestbook. Mar 16 21:37:19.385: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 16 21:37:19.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2792' Mar 16 21:37:19.545: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 21:37:19.545: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 16 21:37:19.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2792' Mar 16 21:37:19.671: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 16 21:37:19.672: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 16 21:37:19.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2792' Mar 16 21:37:19.814: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 21:37:19.814: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 16 21:37:19.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2792' Mar 16 21:37:19.913: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 21:37:19.913: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 16 21:37:19.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2792' Mar 16 21:37:20.002: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 21:37:20.002: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 16 21:37:20.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2792' Mar 16 21:37:20.244: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 21:37:20.244: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:37:20.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2792" for this suite. 
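For readability: the log capture flattens the manifests that the test pipes to kubectl create -f -. Re-indented, with content exactly as logged and only line breaks and indentation restored, the frontend Deployment reads:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["guestbook", "--backend-port", "6379"]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

The agnhost-master and agnhost-slave Deployments and Services logged above follow the same shape, with agnhost standing in for both frontend and backend.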
• [SLOW TEST:15.547 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":87,"skipped":1708,"failed":0} [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:37:20.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 16 21:37:28.117: INFO: 2 pods remaining Mar 16 21:37:28.117: INFO: 0 pods have nil DeletionTimestamp Mar 16 21:37:28.117: INFO: Mar 16 21:37:29.589: INFO: 0 pods remaining Mar 16 21:37:29.589: INFO: 0 pods have nil DeletionTimestamp Mar 16 21:37:29.589: INFO: STEP: Gathering metrics W0316 21:37:30.109399 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 21:37:30.109: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:37:30.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9056" for this suite. 
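The behavior verified here matches foreground cascading deletion: the rc receives a foregroundDeletion finalizer and is only removed once the garbage collector has deleted its pods, which is why the log counts pods "remaining" before the rc disappears. As a sketch, the delete options requesting this look like the following; the suite issues the call from Go, so the exact body is an assumption, not taken from this log:

# meta/v1 DeleteOptions body sent with the DELETE on the rc
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground   # keep the owner until its dependents are gone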
• [SLOW TEST:9.814 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":88,"skipped":1708,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:37:30.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 16 21:37:30.529: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 16 21:37:41.399: INFO: >>> kubeConfig: /root/.kube/config Mar 16 21:37:44.263: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:37:54.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3092" for this suite. 
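A sketch of the kind of object this test publishes: a single CRD serving two versions, both of which must show up in the OpenAPI document. The group, names, and schema below are illustrative; the suite generates a random group per run:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.example.com   # hypothetical; must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  versions:
  - name: v1
    served: true
    storage: true                   # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true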
• [SLOW TEST:24.429 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":89,"skipped":1763,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:37:54.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0316 21:38:04.757759 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 21:38:04.757: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:38:04.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1250" for this suite. 
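For contrast with the previous garbage-collector test: here the delete options do not request orphaning, so once the rc is gone the garbage collector removes the pods it owned, and the suite simply waits for them to disappear. An illustrative rc of the sort such a test creates; the name, image, and replica count are assumptions, not values from this run:

apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc        # hypothetical name
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest     # must match spec.selector
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1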
• [SLOW TEST:10.138 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":90,"skipped":1780,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:38:04.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-aca2b56d-a94d-459e-a244-fa9171025913 STEP: Creating a pod to test consume configMaps Mar 16 21:38:04.877: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f2171de-e59c-4a04-a812-d8c87b3eadf3" in namespace "configmap-5172" to be "success or failure" Mar 16 21:38:04.880: INFO: Pod "pod-configmaps-7f2171de-e59c-4a04-a812-d8c87b3eadf3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054533ms Mar 16 21:38:06.884: INFO: Pod "pod-configmaps-7f2171de-e59c-4a04-a812-d8c87b3eadf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007163378s Mar 16 21:38:08.888: INFO: Pod "pod-configmaps-7f2171de-e59c-4a04-a812-d8c87b3eadf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011524766s STEP: Saw pod success Mar 16 21:38:08.888: INFO: Pod "pod-configmaps-7f2171de-e59c-4a04-a812-d8c87b3eadf3" satisfied condition "success or failure" Mar 16 21:38:08.892: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-7f2171de-e59c-4a04-a812-d8c87b3eadf3 container configmap-volume-test: STEP: delete the pod Mar 16 21:38:08.924: INFO: Waiting for pod pod-configmaps-7f2171de-e59c-4a04-a812-d8c87b3eadf3 to disappear Mar 16 21:38:08.929: INFO: Pod pod-configmaps-7f2171de-e59c-4a04-a812-d8c87b3eadf3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:38:08.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5172" for this suite. 
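What "consumable in multiple volumes in the same pod" exercises, written out by hand: two volume entries backed by the same ConfigMap, mounted at two paths. The ConfigMap name, mount paths, and cat command below are illustrative; the suite uses generated names and its own mount-test image:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume      # hypothetical; the run used a generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume         # both volumes point at the same ConfigMap
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2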
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1802,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:38:08.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:38:08.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 16 21:38:09.126: INFO: stderr: "" Mar 16 21:38:09.126: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-03-12T20:50:47Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:38:09.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5627" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":92,"skipped":1822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:38:09.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 21:38:09.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d615e86-101b-4123-852e-f6dadab2252b" in namespace "downward-api-5319" to be "success or failure" Mar 16 21:38:09.245: INFO: Pod "downwardapi-volume-7d615e86-101b-4123-852e-f6dadab2252b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.747492ms Mar 16 21:38:11.248: INFO: Pod "downwardapi-volume-7d615e86-101b-4123-852e-f6dadab2252b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018544054s Mar 16 21:38:13.253: INFO: Pod "downwardapi-volume-7d615e86-101b-4123-852e-f6dadab2252b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023217512s STEP: Saw pod success Mar 16 21:38:13.253: INFO: Pod "downwardapi-volume-7d615e86-101b-4123-852e-f6dadab2252b" satisfied condition "success or failure" Mar 16 21:38:13.256: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7d615e86-101b-4123-852e-f6dadab2252b container client-container: STEP: delete the pod Mar 16 21:38:13.290: INFO: Waiting for pod downwardapi-volume-7d615e86-101b-4123-852e-f6dadab2252b to disappear Mar 16 21:38:13.310: INFO: Pod downwardapi-volume-7d615e86-101b-4123-852e-f6dadab2252b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:38:13.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5319" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1845,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:38:13.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 16 21:38:13.427: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:13.432: INFO: Number of nodes with available pods: 0 Mar 16 21:38:13.432: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:14.437: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:14.440: INFO: Number of nodes with available pods: 0 Mar 16 21:38:14.440: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:15.437: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:15.440: INFO: Number of nodes with available pods: 0 Mar 16 21:38:15.440: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:16.450: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:16.454: INFO: Number of nodes with available pods: 0 Mar 16 21:38:16.454: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:17.437: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:17.440: INFO: Number of nodes with available pods: 2 Mar 16 21:38:17.440: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 16 21:38:17.456: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:17.459: INFO: Number of nodes with available pods: 1 Mar 16 21:38:17.459: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:18.487: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:18.498: INFO: Number of nodes with available pods: 1 Mar 16 21:38:18.498: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:19.464: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:19.466: INFO: Number of nodes with available pods: 1 Mar 16 21:38:19.466: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:20.463: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:20.466: INFO: Number of nodes with available pods: 1 Mar 16 21:38:20.466: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:21.463: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:21.466: INFO: Number of nodes with available pods: 1 Mar 16 21:38:21.466: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:22.463: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:22.466: INFO: Number of nodes with available pods: 1 Mar 16 21:38:22.466: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:23.464: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:23.467: INFO: Number of nodes with available pods: 1 Mar 16 21:38:23.467: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:38:24.463: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 21:38:24.466: INFO: Number of nodes with available pods: 2 Mar 16 21:38:24.466: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7759, will wait for the garbage collector to delete the pods Mar 16 21:38:24.528: INFO: Deleting DaemonSet.extensions daemon-set took: 5.209247ms Mar 16 21:38:24.628: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.263888ms Mar 16 21:38:39.230: INFO: Number of nodes with available pods: 0 Mar 16 21:38:39.230: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 21:38:39.233: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7759/daemonsets","resourceVersion":"323621"},"items":null} Mar 16 21:38:39.235: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7759/pods","resourceVersion":"323621"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:38:39.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7759" for this suite. • [SLOW TEST:25.935 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":94,"skipped":1858,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:38:39.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 16 21:38:40.038: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 16 21:38:42.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991520, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991520, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991520, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991520, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:38:45.083: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook 
endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:38:45.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:38:46.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6684" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.170 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":95,"skipped":1885,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:38:46.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 21:38:49.538: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:38:49.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8812" for this suite. 
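The termination-message check above hinges on FallbackToLogsOnError applying only when a container fails: a container that exits 0 without writing /dev/termination-log and without producing logs ends with an empty message, which is what the "Expected: &{} to match" line asserts. A minimal reproduction; the image and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox
    # exits successfully, writes nothing to the termination log and produces
    # no stdout/stderr, so the message stays empty
    command: ["/bin/true"]
    terminationMessagePolicy: FallbackToLogsOnError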
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1885,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:38:49.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 16 21:38:49.849: INFO: Waiting up to 5m0s for pod "client-containers-2fe99ca6-2db8-430e-b991-4849d24a642c" in namespace "containers-5930" to be "success or failure" Mar 16 21:38:49.852: INFO: Pod "client-containers-2fe99ca6-2db8-430e-b991-4849d24a642c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.639895ms Mar 16 21:38:51.870: INFO: Pod "client-containers-2fe99ca6-2db8-430e-b991-4849d24a642c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021117669s Mar 16 21:38:53.894: INFO: Pod "client-containers-2fe99ca6-2db8-430e-b991-4849d24a642c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045248948s STEP: Saw pod success Mar 16 21:38:53.894: INFO: Pod "client-containers-2fe99ca6-2db8-430e-b991-4849d24a642c" satisfied condition "success or failure" Mar 16 21:38:53.897: INFO: Trying to get logs from node jerma-worker2 pod client-containers-2fe99ca6-2db8-430e-b991-4849d24a642c container test-container: STEP: delete the pod Mar 16 21:38:53.950: INFO: Waiting for pod client-containers-2fe99ca6-2db8-430e-b991-4849d24a642c to disappear Mar 16 21:38:53.966: INFO: Pod client-containers-2fe99ca6-2db8-430e-b991-4849d24a642c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:38:53.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5930" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1893,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:38:53.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:38:54.045: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-2a874b76-d3b6-4fbf-bf2b-c4ec06710ae2" in namespace "security-context-test-1087" to be "success or failure" Mar 16 21:38:54.050: INFO: Pod "busybox-privileged-false-2a874b76-d3b6-4fbf-bf2b-c4ec06710ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.975721ms Mar 16 21:38:56.054: INFO: Pod "busybox-privileged-false-2a874b76-d3b6-4fbf-bf2b-c4ec06710ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009010632s Mar 16 21:38:58.058: INFO: Pod "busybox-privileged-false-2a874b76-d3b6-4fbf-bf2b-c4ec06710ae2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013125245s Mar 16 21:38:58.058: INFO: Pod "busybox-privileged-false-2a874b76-d3b6-4fbf-bf2b-c4ec06710ae2" satisfied condition "success or failure" Mar 16 21:38:58.065: INFO: Got logs for pod "busybox-privileged-false-2a874b76-d3b6-4fbf-bf2b-c4ec06710ae2": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:38:58.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1087" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1894,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:38:58.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 16 21:38:58.130: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:39:09.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6563" for this suite. • [SLOW TEST:11.429 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1898,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:39:09.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 21:39:09.975: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 21:39:12.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991549, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991549, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991550, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991549, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 21:39:14.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991549, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991549, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991550, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991549, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:39:17.195: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:39:17.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8973" for this suite. STEP: Destroying namespace "webhook-8973-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.998 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":100,"skipped":1918,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:39:17.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 21:39:18.414: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 21:39:20.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991558, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991558, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991558, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991558, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:39:23.584: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: 
Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:39:35.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3276" for this suite. STEP: Destroying namespace "webhook-3276-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.359 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":101,"skipped":1953,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:39:35.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 21:39:35.957: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68edff19-c771-43e0-a76c-c0765fbf860e" in namespace "projected-7010" to be "success or failure" Mar 16 21:39:35.984: INFO: Pod "downwardapi-volume-68edff19-c771-43e0-a76c-c0765fbf860e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.939024ms Mar 16 21:39:37.988: INFO: Pod "downwardapi-volume-68edff19-c771-43e0-a76c-c0765fbf860e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030942317s Mar 16 21:39:39.992: INFO: Pod "downwardapi-volume-68edff19-c771-43e0-a76c-c0765fbf860e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034989707s STEP: Saw pod success Mar 16 21:39:39.992: INFO: Pod "downwardapi-volume-68edff19-c771-43e0-a76c-c0765fbf860e" satisfied condition "success or failure" Mar 16 21:39:39.995: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-68edff19-c771-43e0-a76c-c0765fbf860e container client-container: STEP: delete the pod Mar 16 21:39:40.026: INFO: Waiting for pod downwardapi-volume-68edff19-c771-43e0-a76c-c0765fbf860e to disappear Mar 16 21:39:40.051: INFO: Pod downwardapi-volume-68edff19-c771-43e0-a76c-c0765fbf860e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:39:40.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7010" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:39:40.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 16 21:39:44.638: INFO: Successfully updated pod "labelsupdate6b458db6-1d3e-4099-8994-2bae064d78e3" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:39:46.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3366" for this suite. 
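The labels-update test passes because a downwardAPI projection is kept current by the kubelet: after "Successfully updated pod", the projected labels file is rewritten in place with no container restart. A minimal pod of this shape; the names and the polling loop are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example     # hypothetical; the run used a generated name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox
    # re-read the projected file; the kubelet rewrites it when labels change
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels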
• [SLOW TEST:6.631 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":2042,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:39:46.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2731.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2731.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2731.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2731.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2731.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2731.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 21:39:52.866: INFO: DNS probes using dns-2731/dns-test-72e67dfb-fc93-4956-b659-aad0bde75ce5 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:39:53.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2731" for this suite. 
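The hostname records probed above come from the headless-service-plus-subdomain pattern. A minimal sketch of that setup follows; the service, pod, and namespace names here are placeholders, not the generated dns-2731 fixtures.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  clusterIP: None               # headless: backing pods get their own A records
  selector:
    app: dns-querier
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier
  labels:
    app: dns-querier
spec:
  hostname: dns-querier         # resolves as <hostname>.<subdomain>.<ns>.svc.cluster.local
  subdomain: dns-test-service   # must match the headless service name
  containers:
  - name: querier
    image: busybox
    command: ["sleep", "3600"]
EOF
# From inside the cluster, mirroring the probe loop above:
#   getent hosts dns-querier.dns-test-service.default.svc.cluster.local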
• [SLOW TEST:6.393 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":104,"skipped":2058,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:39:53.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3007 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 21:39:53.317: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 16 21:40:19.509: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.50:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3007 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:40:19.509: INFO: >>> kubeConfig: /root/.kube/config I0316 21:40:19.543959 6 log.go:172] (0xc002ad0160) (0xc000b08280) Create stream I0316 21:40:19.543997 6 log.go:172] (0xc002ad0160) (0xc000b08280) Stream added, broadcasting: 1 I0316 21:40:19.546903 6 log.go:172] (0xc002ad0160) Reply frame received for 1 I0316 21:40:19.546956 6 log.go:172] (0xc002ad0160) (0xc001ad8d20) Create stream I0316 21:40:19.546973 6 log.go:172] (0xc002ad0160) (0xc001ad8d20) Stream added, broadcasting: 3 I0316 21:40:19.547972 6 log.go:172] (0xc002ad0160) Reply frame received for 3 I0316 21:40:19.548006 6 log.go:172] (0xc002ad0160) (0xc000b08640) Create stream I0316 21:40:19.548020 6 log.go:172] (0xc002ad0160) (0xc000b08640) Stream added, broadcasting: 5 I0316 21:40:19.549079 6 log.go:172] (0xc002ad0160) Reply frame received for 5 I0316 21:40:19.629067 6 log.go:172] (0xc002ad0160) Data frame received for 5 I0316 21:40:19.629108 6 log.go:172] (0xc000b08640) (5) Data frame handling I0316 21:40:19.629230 6 log.go:172] (0xc002ad0160) Data frame received for 3 I0316 21:40:19.629248 6 log.go:172] (0xc001ad8d20) (3) Data frame handling I0316 21:40:19.629276 6 log.go:172] (0xc001ad8d20) (3) Data frame sent I0316 21:40:19.629518 6 log.go:172] (0xc002ad0160) Data frame received for 3 I0316 21:40:19.629554 6 log.go:172] (0xc001ad8d20) (3) Data frame handling I0316 21:40:19.631474 6 log.go:172] (0xc002ad0160) Data frame received for 1 I0316 21:40:19.631508 6 log.go:172] (0xc000b08280) (1) Data frame handling I0316 21:40:19.631523 6 log.go:172] (0xc000b08280) (1) Data frame sent I0316 21:40:19.631569 6 log.go:172] (0xc002ad0160) 
(0xc000b08280) Stream removed, broadcasting: 1 I0316 21:40:19.631600 6 log.go:172] (0xc002ad0160) Go away received I0316 21:40:19.631714 6 log.go:172] (0xc002ad0160) (0xc000b08280) Stream removed, broadcasting: 1 I0316 21:40:19.631741 6 log.go:172] (0xc002ad0160) (0xc001ad8d20) Stream removed, broadcasting: 3 I0316 21:40:19.631756 6 log.go:172] (0xc002ad0160) (0xc000b08640) Stream removed, broadcasting: 5 Mar 16 21:40:19.631: INFO: Found all expected endpoints: [netserver-0] Mar 16 21:40:19.635: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.84:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3007 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:40:19.635: INFO: >>> kubeConfig: /root/.kube/config I0316 21:40:19.669638 6 log.go:172] (0xc005256370) (0xc000a78d20) Create stream I0316 21:40:19.669662 6 log.go:172] (0xc005256370) (0xc000a78d20) Stream added, broadcasting: 1 I0316 21:40:19.671693 6 log.go:172] (0xc005256370) Reply frame received for 1 I0316 21:40:19.671739 6 log.go:172] (0xc005256370) (0xc0000feaa0) Create stream I0316 21:40:19.671757 6 log.go:172] (0xc005256370) (0xc0000feaa0) Stream added, broadcasting: 3 I0316 21:40:19.672633 6 log.go:172] (0xc005256370) Reply frame received for 3 I0316 21:40:19.672666 6 log.go:172] (0xc005256370) (0xc000b086e0) Create stream I0316 21:40:19.672680 6 log.go:172] (0xc005256370) (0xc000b086e0) Stream added, broadcasting: 5 I0316 21:40:19.673706 6 log.go:172] (0xc005256370) Reply frame received for 5 I0316 21:40:19.742856 6 log.go:172] (0xc005256370) Data frame received for 3 I0316 21:40:19.742896 6 log.go:172] (0xc0000feaa0) (3) Data frame handling I0316 21:40:19.742912 6 log.go:172] (0xc0000feaa0) (3) Data frame sent I0316 21:40:19.742947 6 log.go:172] (0xc005256370) Data frame received for 5 I0316 21:40:19.742995 6 log.go:172] (0xc000b086e0) (5) Data frame handling I0316 21:40:19.743284 6 log.go:172] (0xc005256370) Data frame received for 3 I0316 21:40:19.743302 6 log.go:172] (0xc0000feaa0) (3) Data frame handling I0316 21:40:19.744740 6 log.go:172] (0xc005256370) Data frame received for 1 I0316 21:40:19.744761 6 log.go:172] (0xc000a78d20) (1) Data frame handling I0316 21:40:19.744774 6 log.go:172] (0xc000a78d20) (1) Data frame sent I0316 21:40:19.744931 6 log.go:172] (0xc005256370) (0xc000a78d20) Stream removed, broadcasting: 1 I0316 21:40:19.745015 6 log.go:172] (0xc005256370) Go away received I0316 21:40:19.745081 6 log.go:172] (0xc005256370) (0xc000a78d20) Stream removed, broadcasting: 1 I0316 21:40:19.745249 6 log.go:172] (0xc005256370) (0xc0000feaa0) Stream removed, broadcasting: 3 I0316 21:40:19.745343 6 log.go:172] (0xc005256370) (0xc000b086e0) Stream removed, broadcasting: 5 Mar 16 21:40:19.745: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:40:19.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3007" for this suite. 
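The pass condition above reduces to an HTTP fetch of /hostName on each netserver pod IP from the host-network test pod. Run by hand it would look like the following; the pod IP is the one from this run and will differ on any other cluster.

kubectl exec -n pod-network-test-3007 host-test-container-pod -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.50:8080/hostName"
# Expect the serving pod's name (e.g. netserver-0) on stdout.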
• [SLOW TEST:26.670 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":2066,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:40:19.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:40:19.904: INFO: Waiting up to 5m0s for pod "busybox-user-65534-41fd8df8-4a00-4663-a433-f15141274333" in namespace "security-context-test-4179" to be "success or failure" Mar 16 21:40:19.914: INFO: Pod "busybox-user-65534-41fd8df8-4a00-4663-a433-f15141274333": Phase="Pending", Reason="", readiness=false. Elapsed: 9.853802ms Mar 16 21:40:21.918: INFO: Pod "busybox-user-65534-41fd8df8-4a00-4663-a433-f15141274333": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013832493s Mar 16 21:40:23.923: INFO: Pod "busybox-user-65534-41fd8df8-4a00-4663-a433-f15141274333": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018211896s Mar 16 21:40:23.923: INFO: Pod "busybox-user-65534-41fd8df8-4a00-4663-a433-f15141274333" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:40:23.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4179" for this suite. 
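A minimal sketch of the runAsUser check above, with a hypothetical pod name; busybox's `id -u` should print 65534.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-example
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "id -u"]   # expected output: 65534
    securityContext:
      runAsUser: 65534               # the conventional "nobody" uid
EOF
kubectl logs busybox-user-65534-example   # -> 65534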
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":2079,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:40:23.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:40:23.982: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 16 21:40:26.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2511 create -f -' Mar 16 21:40:30.747: INFO: stderr: "" Mar 16 21:40:30.747: INFO: stdout: "e2e-test-crd-publish-openapi-5750-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 16 21:40:30.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2511 delete e2e-test-crd-publish-openapi-5750-crds test-cr' Mar 16 21:40:30.853: INFO: stderr: "" Mar 16 21:40:30.853: INFO: stdout: "e2e-test-crd-publish-openapi-5750-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 16 21:40:30.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2511 apply -f -' Mar 16 21:40:31.105: INFO: stderr: "" Mar 16 21:40:31.105: INFO: stdout: "e2e-test-crd-publish-openapi-5750-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 16 21:40:31.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2511 delete e2e-test-crd-publish-openapi-5750-crds test-cr' Mar 16 21:40:31.206: INFO: stderr: "" Mar 16 21:40:31.206: INFO: stdout: "e2e-test-crd-publish-openapi-5750-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 16 21:40:31.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5750-crds' Mar 16 21:40:31.427: INFO: stderr: "" Mar 16 21:40:31.427: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5750-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:40:34.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2511" for this suite. 
• [SLOW TEST:10.346 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":107,"skipped":2092,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:40:34.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 16 21:40:34.347: INFO: Waiting up to 5m0s for pod "pod-4bde0b88-af4b-481e-a93b-3c80548fd03e" in namespace "emptydir-1784" to be "success or failure" Mar 16 21:40:34.364: INFO: Pod "pod-4bde0b88-af4b-481e-a93b-3c80548fd03e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.544506ms Mar 16 21:40:36.368: INFO: Pod "pod-4bde0b88-af4b-481e-a93b-3c80548fd03e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020211172s Mar 16 21:40:38.372: INFO: Pod "pod-4bde0b88-af4b-481e-a93b-3c80548fd03e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024516679s STEP: Saw pod success Mar 16 21:40:38.372: INFO: Pod "pod-4bde0b88-af4b-481e-a93b-3c80548fd03e" satisfied condition "success or failure" Mar 16 21:40:38.375: INFO: Trying to get logs from node jerma-worker pod pod-4bde0b88-af4b-481e-a93b-3c80548fd03e container test-container: STEP: delete the pod Mar 16 21:40:38.395: INFO: Waiting for pod pod-4bde0b88-af4b-481e-a93b-3c80548fd03e to disappear Mar 16 21:40:38.413: INFO: Pod pod-4bde0b88-af4b-481e-a93b-3c80548fd03e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:40:38.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1784" for this suite. 
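The (non-root,0644,default) emptyDir case above amounts to writing a file with mode 0644 on a default-medium emptyDir as a non-root user. A hand-rolled equivalent, with illustrative names and uid:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-0644-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                      # the non-root variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /ed/f && chmod 0644 /ed/f && ls -l /ed/f && cat /ed/f"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}                         # default medium (node-local disk)
EOF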
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":2105,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:40:38.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:40:52.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6831" for this suite. • [SLOW TEST:14.075 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":109,"skipped":2119,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:40:52.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-5249 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5249 STEP: Deleting pre-stop pod Mar 16 21:41:05.589: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:41:05.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5249" for this suite. • [SLOW TEST:13.157 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":110,"skipped":2133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:41:05.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-86252eab-dbb5-417c-8087-46b16e3912cb STEP: Creating a pod to test consume secrets Mar 16 21:41:06.091: INFO: Waiting up to 5m0s for pod "pod-secrets-53b57794-1b93-46ac-bbc6-3299b54f9617" in namespace "secrets-5318" to be "success or failure" Mar 16 21:41:06.096: INFO: Pod "pod-secrets-53b57794-1b93-46ac-bbc6-3299b54f9617": Phase="Pending", Reason="", readiness=false. Elapsed: 4.406672ms Mar 16 21:41:08.130: INFO: Pod "pod-secrets-53b57794-1b93-46ac-bbc6-3299b54f9617": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039056525s Mar 16 21:41:10.134: INFO: Pod "pod-secrets-53b57794-1b93-46ac-bbc6-3299b54f9617": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042890409s STEP: Saw pod success Mar 16 21:41:10.134: INFO: Pod "pod-secrets-53b57794-1b93-46ac-bbc6-3299b54f9617" satisfied condition "success or failure" Mar 16 21:41:10.138: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-53b57794-1b93-46ac-bbc6-3299b54f9617 container secret-volume-test: STEP: delete the pod Mar 16 21:41:10.205: INFO: Waiting for pod pod-secrets-53b57794-1b93-46ac-bbc6-3299b54f9617 to disappear Mar 16 21:41:10.209: INFO: Pod pod-secrets-53b57794-1b93-46ac-bbc6-3299b54f9617 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:41:10.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5318" for this suite. STEP: Destroying namespace "secret-namespace-5445" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":2157,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:41:10.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 21:41:10.650: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 21:41:12.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991670, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991670, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991670, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991670, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:41:15.747: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:41:16.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7341" for this suite. STEP: Destroying namespace "webhook-7341-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.047 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":112,"skipped":2159,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:41:16.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 STEP: creating the pod Mar 16 21:41:16.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9939' Mar 16 21:41:16.819: INFO: stderr: "" Mar 16 21:41:16.819: INFO: stdout: "pod/pause created\n" Mar 16 21:41:16.819: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 16 21:41:16.819: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9939" to be "running and ready" Mar 16 21:41:16.826: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.612499ms Mar 16 21:41:18.837: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017587975s Mar 16 21:41:20.841: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.021617373s Mar 16 21:41:20.841: INFO: Pod "pause" satisfied condition "running and ready" Mar 16 21:41:20.841: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 16 21:41:20.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9939' Mar 16 21:41:20.931: INFO: stderr: "" Mar 16 21:41:20.931: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 16 21:41:20.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9939' Mar 16 21:41:21.018: INFO: stderr: "" Mar 16 21:41:21.019: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 16 21:41:21.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9939' Mar 16 21:41:21.115: INFO: stderr: "" Mar 16 21:41:21.115: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 16 21:41:21.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9939' Mar 16 21:41:21.214: INFO: stderr: "" Mar 16 21:41:21.214: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 STEP: using delete to clean up resources Mar 16 21:41:21.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9939' Mar 16 21:41:21.937: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 21:41:21.938: INFO: stdout: "pod \"pause\" force deleted\n" Mar 16 21:41:21.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9939' Mar 16 21:41:22.293: INFO: stderr: "No resources found in kubectl-9939 namespace.\n" Mar 16 21:41:22.293: INFO: stdout: "" Mar 16 21:41:22.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9939 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 21:41:22.384: INFO: stderr: "" Mar 16 21:41:22.384: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:41:22.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9939" for this suite. 
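The label round-trip above is plain kubectl. The three commands, runnable as-is against the same namespace while the pause pod exists:

kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-9939   # add the label
kubectl get pod pause -L testing-label --namespace=kubectl-9939                       # extra column shows the value
kubectl label pods pause testing-label- --namespace=kubectl-9939                      # trailing '-' removes the label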
• [SLOW TEST:6.121 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1379 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":113,"skipped":2175,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:41:22.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:41:22.446: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-189fb409-1416-4191-9f71-9d30b88934ef" in namespace "security-context-test-6838" to be "success or failure" Mar 16 21:41:22.467: INFO: Pod "busybox-readonly-false-189fb409-1416-4191-9f71-9d30b88934ef": Phase="Pending", Reason="", readiness=false. Elapsed: 21.537857ms Mar 16 21:41:24.490: INFO: Pod "busybox-readonly-false-189fb409-1416-4191-9f71-9d30b88934ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044095065s Mar 16 21:41:26.494: INFO: Pod "busybox-readonly-false-189fb409-1416-4191-9f71-9d30b88934ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048597457s Mar 16 21:41:26.494: INFO: Pod "busybox-readonly-false-189fb409-1416-4191-9f71-9d30b88934ef" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:41:26.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6838" for this suite. 
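A sketch of the writable-rootfs case above, with a hypothetical pod name; flipping the flag to true makes the same write fail with a read-only filesystem error.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-example
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "touch /rootfs-check && echo writable"]   # writes to the root filesystem
    securityContext:
      readOnlyRootFilesystem: false
EOF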
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":2182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:41:26.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d02a67a5-d903-47e0-a1e3-b2d9232f43c3 STEP: Creating a pod to test consume configMaps Mar 16 21:41:26.585: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3dde336-4588-4c86-8f6f-d7d3efd5200a" in namespace "projected-9923" to be "success or failure" Mar 16 21:41:26.625: INFO: Pod "pod-projected-configmaps-b3dde336-4588-4c86-8f6f-d7d3efd5200a": Phase="Pending", Reason="", readiness=false. Elapsed: 39.632193ms Mar 16 21:41:28.628: INFO: Pod "pod-projected-configmaps-b3dde336-4588-4c86-8f6f-d7d3efd5200a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043167341s Mar 16 21:41:30.633: INFO: Pod "pod-projected-configmaps-b3dde336-4588-4c86-8f6f-d7d3efd5200a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04778336s STEP: Saw pod success Mar 16 21:41:30.633: INFO: Pod "pod-projected-configmaps-b3dde336-4588-4c86-8f6f-d7d3efd5200a" satisfied condition "success or failure" Mar 16 21:41:30.636: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b3dde336-4588-4c86-8f6f-d7d3efd5200a container projected-configmap-volume-test: STEP: delete the pod Mar 16 21:41:30.666: INFO: Waiting for pod pod-projected-configmaps-b3dde336-4588-4c86-8f6f-d7d3efd5200a to disappear Mar 16 21:41:30.678: INFO: Pod pod-projected-configmaps-b3dde336-4588-4c86-8f6f-d7d3efd5200a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:41:30.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9923" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":2205,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:41:30.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-f0bd022a-50d3-4b41-9fd8-1cd93e4e6065 Mar 16 21:41:30.768: INFO: Pod name my-hostname-basic-f0bd022a-50d3-4b41-9fd8-1cd93e4e6065: Found 0 pods out of 1 Mar 16 21:41:35.772: INFO: Pod name my-hostname-basic-f0bd022a-50d3-4b41-9fd8-1cd93e4e6065: Found 1 pods out of 1 Mar 16 21:41:35.772: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f0bd022a-50d3-4b41-9fd8-1cd93e4e6065" are running Mar 16 21:41:35.775: INFO: Pod "my-hostname-basic-f0bd022a-50d3-4b41-9fd8-1cd93e4e6065-zkpxj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 21:41:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 21:41:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 21:41:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 21:41:30 +0000 UTC Reason: Message:}]) Mar 16 21:41:35.775: INFO: Trying to dial the pod Mar 16 21:41:40.787: INFO: Controller my-hostname-basic-f0bd022a-50d3-4b41-9fd8-1cd93e4e6065: Got expected result from replica 1 [my-hostname-basic-f0bd022a-50d3-4b41-9fd8-1cd93e4e6065-zkpxj]: "my-hostname-basic-f0bd022a-50d3-4b41-9fd8-1cd93e4e6065-zkpxj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:41:40.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1250" for this suite. 
• [SLOW TEST:10.110 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":116,"skipped":2207,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:41:40.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 16 21:41:40.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2250' Mar 16 21:41:40.952: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 16 21:41:40.953: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738 Mar 16 21:41:43.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2250' Mar 16 21:41:43.135: INFO: stderr: "" Mar 16 21:41:43.135: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:41:43.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2250" for this suite. 
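The stderr above flags --generator=deployment/apps.v1 as deprecated. The equivalent without the generator, available in this kubectl version, is:

kubectl create deployment e2e-test-httpd-deployment \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2250
kubectl delete deployment e2e-test-httpd-deployment --namespace=kubectl-2250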
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":117,"skipped":2210,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:41:43.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-46ac4a6f-0e70-4eb9-b27c-eb015c0a5678 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:41:47.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1839" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":2224,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:41:47.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-8224/secret-test-7024cd5c-15fc-440a-b9bb-ee9650451a54 STEP: Creating a pod to test consume secrets Mar 16 21:41:47.343: INFO: Waiting up to 5m0s for pod "pod-configmaps-29136f29-abb2-4c5f-b7f8-c3a83e09c66d" in namespace "secrets-8224" to be "success or failure" Mar 16 21:41:47.349: INFO: Pod "pod-configmaps-29136f29-abb2-4c5f-b7f8-c3a83e09c66d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.2481ms Mar 16 21:41:49.352: INFO: Pod "pod-configmaps-29136f29-abb2-4c5f-b7f8-c3a83e09c66d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009040686s Mar 16 21:41:51.357: INFO: Pod "pod-configmaps-29136f29-abb2-4c5f-b7f8-c3a83e09c66d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013686739s STEP: Saw pod success Mar 16 21:41:51.357: INFO: Pod "pod-configmaps-29136f29-abb2-4c5f-b7f8-c3a83e09c66d" satisfied condition "success or failure" Mar 16 21:41:51.360: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-29136f29-abb2-4c5f-b7f8-c3a83e09c66d container env-test: STEP: delete the pod Mar 16 21:41:51.380: INFO: Waiting for pod pod-configmaps-29136f29-abb2-4c5f-b7f8-c3a83e09c66d to disappear Mar 16 21:41:51.384: INFO: Pod pod-configmaps-29136f29-abb2-4c5f-b7f8-c3a83e09c66d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:41:51.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8224" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:41:51.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:41:51.518: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 16 21:41:51.534: INFO: Number of nodes with available pods: 0 Mar 16 21:41:51.534: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 16 21:41:51.597: INFO: Number of nodes with available pods: 0 Mar 16 21:41:51.597: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:41:52.601: INFO: Number of nodes with available pods: 0 Mar 16 21:41:52.601: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:41:53.601: INFO: Number of nodes with available pods: 0 Mar 16 21:41:53.601: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:41:54.602: INFO: Number of nodes with available pods: 0 Mar 16 21:41:54.602: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:41:55.602: INFO: Number of nodes with available pods: 1 Mar 16 21:41:55.602: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 16 21:41:55.652: INFO: Number of nodes with available pods: 1 Mar 16 21:41:55.653: INFO: Number of running nodes: 0, number of available pods: 1 Mar 16 21:41:56.657: INFO: Number of nodes with available pods: 0 Mar 16 21:41:56.657: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 16 21:41:56.666: INFO: Number of nodes with available pods: 0 Mar 16 21:41:56.666: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:41:57.670: INFO: Number of nodes with available pods: 0 Mar 16 21:41:57.670: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:41:58.669: INFO: Number of nodes with available pods: 0 Mar 16 21:41:58.669: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:41:59.670: INFO: Number of nodes with available pods: 0 Mar 16 21:41:59.670: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:00.671: INFO: Number of nodes with available pods: 0 Mar 16 21:42:00.671: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:01.670: INFO: Number of nodes with available pods: 0 Mar 16 21:42:01.670: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:02.671: INFO: Number of nodes with available pods: 0 Mar 16 21:42:02.671: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:03.670: INFO: Number of nodes with available pods: 0 Mar 16 21:42:03.670: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:04.671: INFO: Number of nodes with available pods: 0 Mar 16 21:42:04.671: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:05.671: INFO: Number of nodes with available pods: 0 Mar 16 21:42:05.671: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:06.670: INFO: Number of nodes with available pods: 0 Mar 16 21:42:06.671: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:07.670: INFO: Number of nodes with available pods: 0 Mar 16 21:42:07.670: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:08.669: INFO: Number of nodes with available pods: 0 Mar 16 21:42:08.669: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:09.670: INFO: Number of nodes with available pods: 0 Mar 16 21:42:09.670: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:10.670: INFO: Number of nodes with available pods: 0 Mar 16 21:42:10.670: INFO: Node jerma-worker is running more than one daemon pod Mar 16 21:42:11.688: INFO: Number of nodes with available pods: 0 Mar 16 21:42:11.688: INFO: Node jerma-worker is running more than one daemon 
pod Mar 16 21:42:12.671: INFO: Number of nodes with available pods: 1 Mar 16 21:42:12.671: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4448, will wait for the garbage collector to delete the pods Mar 16 21:42:12.736: INFO: Deleting DaemonSet.extensions daemon-set took: 6.088186ms Mar 16 21:42:13.036: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.308695ms Mar 16 21:42:16.040: INFO: Number of nodes with available pods: 0 Mar 16 21:42:16.040: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 21:42:16.043: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4448/daemonsets","resourceVersion":"325338"},"items":null} Mar 16 21:42:16.046: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4448/pods","resourceVersion":"325338"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:42:16.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4448" for this suite. • [SLOW TEST:24.715 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":120,"skipped":2260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:42:16.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:42:16.152: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 16 21:42:19.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6017 create -f -' Mar 16 21:42:22.130: INFO: stderr: "" Mar 16 21:42:22.130: INFO: stdout: "e2e-test-crd-publish-openapi-8708-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 16 21:42:22.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6017 delete e2e-test-crd-publish-openapi-8708-crds test-cr' Mar 16 21:42:22.232: INFO: stderr: 
"" Mar 16 21:42:22.232: INFO: stdout: "e2e-test-crd-publish-openapi-8708-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 16 21:42:22.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6017 apply -f -' Mar 16 21:42:22.471: INFO: stderr: "" Mar 16 21:42:22.471: INFO: stdout: "e2e-test-crd-publish-openapi-8708-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 16 21:42:22.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6017 delete e2e-test-crd-publish-openapi-8708-crds test-cr' Mar 16 21:42:22.586: INFO: stderr: "" Mar 16 21:42:22.586: INFO: stdout: "e2e-test-crd-publish-openapi-8708-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 16 21:42:22.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8708-crds' Mar 16 21:42:22.876: INFO: stderr: "" Mar 16 21:42:22.877: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8708-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:42:24.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6017" for this suite. 
• [SLOW TEST:8.619 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":121,"skipped":2292,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:42:24.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 16 21:42:24.853: INFO: Waiting up to 5m0s for pod "pod-782aaa19-b667-40be-9d56-85c9210b0e17" in namespace "emptydir-8531" to be "success or failure" Mar 16 21:42:24.859: INFO: Pod "pod-782aaa19-b667-40be-9d56-85c9210b0e17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164812ms Mar 16 21:42:26.862: INFO: Pod "pod-782aaa19-b667-40be-9d56-85c9210b0e17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00936067s Mar 16 21:42:28.866: INFO: Pod "pod-782aaa19-b667-40be-9d56-85c9210b0e17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013873s STEP: Saw pod success Mar 16 21:42:28.866: INFO: Pod "pod-782aaa19-b667-40be-9d56-85c9210b0e17" satisfied condition "success or failure" Mar 16 21:42:28.870: INFO: Trying to get logs from node jerma-worker pod pod-782aaa19-b667-40be-9d56-85c9210b0e17 container test-container: STEP: delete the pod Mar 16 21:42:28.889: INFO: Waiting for pod pod-782aaa19-b667-40be-9d56-85c9210b0e17 to disappear Mar 16 21:42:28.912: INFO: Pod pod-782aaa19-b667-40be-9d56-85c9210b0e17 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:42:28.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8531" for this suite. 
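The pod above writes a file with the requested 0644 mode onto an emptyDir volume and exits; the suite then reads the container logs to confirm the permissions and content. A rough standalone equivalent, assuming busybox as a stand-in for the suite's mount-test image (all names here are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo mount-tester > /mnt/vol/file && chmod 0644 /mnt/vol/file && ls -l /mnt/vol/file && cat /mnt/vol/file"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/vol
  volumes:
  - name: vol
    emptyDir: {}   # no medium set, i.e. the "default medium": node-local storage
EOF
kubectl logs emptydir-0644-demo   # expect -rw-r--r-- (0644), owner root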
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2303,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:42:28.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-acbb3d79-3a6c-4dc8-bc57-e51abb35345f STEP: Creating secret with name s-test-opt-upd-1e2c9af9-c160-483c-bb1c-8ff7d60b4c24 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-acbb3d79-3a6c-4dc8-bc57-e51abb35345f STEP: Updating secret s-test-opt-upd-1e2c9af9-c160-483c-bb1c-8ff7d60b4c24 STEP: Creating secret with name s-test-opt-create-04cdeea1-ee81-4584-be94-6b09a771a65e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:43:49.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9626" for this suite. • [SLOW TEST:80.506 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2311,"failed":0} SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:43:49.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:43:49.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3288" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":124,"skipped":2315,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:43:49.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:43:49.568: INFO: Creating ReplicaSet my-hostname-basic-09b3a7ae-b299-4d49-a633-845151b99465 Mar 16 21:43:49.605: INFO: Pod name my-hostname-basic-09b3a7ae-b299-4d49-a633-845151b99465: Found 0 pods out of 1 Mar 16 21:43:54.608: INFO: Pod name my-hostname-basic-09b3a7ae-b299-4d49-a633-845151b99465: Found 1 pods out of 1 Mar 16 21:43:54.608: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-09b3a7ae-b299-4d49-a633-845151b99465" is running Mar 16 21:43:54.611: INFO: Pod "my-hostname-basic-09b3a7ae-b299-4d49-a633-845151b99465-j8sc7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 21:43:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 21:43:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 21:43:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 21:43:49 +0000 UTC Reason: Message:}]) Mar 16 21:43:54.611: INFO: Trying to dial the pod Mar 16 21:43:59.623: INFO: Controller my-hostname-basic-09b3a7ae-b299-4d49-a633-845151b99465: Got expected result from replica 1 [my-hostname-basic-09b3a7ae-b299-4d49-a633-845151b99465-j8sc7]: "my-hostname-basic-09b3a7ae-b299-4d49-a633-845151b99465-j8sc7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:43:59.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-832" for this suite. 
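The ReplicaSet in this test runs a single replica of an image that answers HTTP requests with its own pod name; dialing the replica and comparing the reply to the pod name is what the "Got expected result from replica 1" line records. A sketch of an equivalent manifest, reusing the agnhost image that appears elsewhere in this run (treat the serve-hostname args and port 9376 as assumptions about agnhost defaults):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic   # the suite appends a random UUID to this name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]   # replies to HTTP GET with the pod's hostname
        ports:
        - containerPort: 9376
EOF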
• [SLOW TEST:10.105 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":125,"skipped":2323,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:43:59.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 16 21:43:59.723: INFO: Waiting up to 5m0s for pod "pod-df9d9cad-6c61-493d-9ce8-8880df0dc4e0" in namespace "emptydir-9496" to be "success or failure" Mar 16 21:43:59.728: INFO: Pod "pod-df9d9cad-6c61-493d-9ce8-8880df0dc4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074205ms Mar 16 21:44:01.755: INFO: Pod "pod-df9d9cad-6c61-493d-9ce8-8880df0dc4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031340734s Mar 16 21:44:03.759: INFO: Pod "pod-df9d9cad-6c61-493d-9ce8-8880df0dc4e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035482853s STEP: Saw pod success Mar 16 21:44:03.759: INFO: Pod "pod-df9d9cad-6c61-493d-9ce8-8880df0dc4e0" satisfied condition "success or failure" Mar 16 21:44:03.762: INFO: Trying to get logs from node jerma-worker2 pod pod-df9d9cad-6c61-493d-9ce8-8880df0dc4e0 container test-container: STEP: delete the pod Mar 16 21:44:03.779: INFO: Waiting for pod pod-df9d9cad-6c61-493d-9ce8-8880df0dc4e0 to disappear Mar 16 21:44:03.783: INFO: Pod pod-df9d9cad-6c61-493d-9ce8-8880df0dc4e0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:44:03.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9496" for this suite. 
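This variant differs from the earlier (root,0644,default) emptyDir check only in the file mode and the volume medium: medium: Memory places the volume on tmpfs. A sketch under the same assumptions as before (hypothetical names, busybox stand-in):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep ' /mnt/vol ' && echo mount-tester > /mnt/vol/file && chmod 0666 /mnt/vol/file && ls -l /mnt/vol/file"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/vol
  volumes:
  - name: vol
    emptyDir:
      medium: Memory   # tmpfs-backed; usage counts against the container's memory accounting
EOF
kubectl logs emptydir-0666-tmpfs-demo   # mount line should report tmpfs; file mode -rw-rw-rw-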
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2333,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:44:03.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7899 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 21:44:03.868: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 16 21:44:31.955: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.67 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7899 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:44:31.955: INFO: >>> kubeConfig: /root/.kube/config I0316 21:44:31.998853 6 log.go:172] (0xc002ad0000) (0xc001466960) Create stream I0316 21:44:31.998888 6 log.go:172] (0xc002ad0000) (0xc001466960) Stream added, broadcasting: 1 I0316 21:44:32.000758 6 log.go:172] (0xc002ad0000) Reply frame received for 1 I0316 21:44:32.000809 6 log.go:172] (0xc002ad0000) (0xc001ad88c0) Create stream I0316 21:44:32.000824 6 log.go:172] (0xc002ad0000) (0xc001ad88c0) Stream added, broadcasting: 3 I0316 21:44:32.001879 6 log.go:172] (0xc002ad0000) Reply frame received for 3 I0316 21:44:32.001912 6 log.go:172] (0xc002ad0000) (0xc001ad8a00) Create stream I0316 21:44:32.001925 6 log.go:172] (0xc002ad0000) (0xc001ad8a00) Stream added, broadcasting: 5 I0316 21:44:32.002780 6 log.go:172] (0xc002ad0000) Reply frame received for 5 I0316 21:44:33.097652 6 log.go:172] (0xc002ad0000) Data frame received for 3 I0316 21:44:33.097698 6 log.go:172] (0xc001ad88c0) (3) Data frame handling I0316 21:44:33.097732 6 log.go:172] (0xc001ad88c0) (3) Data frame sent I0316 21:44:33.097961 6 log.go:172] (0xc002ad0000) Data frame received for 3 I0316 21:44:33.098000 6 log.go:172] (0xc001ad88c0) (3) Data frame handling I0316 21:44:33.098360 6 log.go:172] (0xc002ad0000) Data frame received for 5 I0316 21:44:33.098399 6 log.go:172] (0xc001ad8a00) (5) Data frame handling I0316 21:44:33.100451 6 log.go:172] (0xc002ad0000) Data frame received for 1 I0316 21:44:33.100479 6 log.go:172] (0xc001466960) (1) Data frame handling I0316 21:44:33.100494 6 log.go:172] (0xc001466960) (1) Data frame sent I0316 21:44:33.100519 6 log.go:172] (0xc002ad0000) (0xc001466960) Stream removed, broadcasting: 1 I0316 21:44:33.100541 6 log.go:172] (0xc002ad0000) Go away received I0316 21:44:33.100735 6 log.go:172] (0xc002ad0000) (0xc001466960) Stream removed, broadcasting: 1 I0316 21:44:33.100762 6 log.go:172] (0xc002ad0000) (0xc001ad88c0) Stream removed, broadcasting: 3 I0316 21:44:33.100775 6 log.go:172] (0xc002ad0000) 
(0xc001ad8a00) Stream removed, broadcasting: 5 Mar 16 21:44:33.100: INFO: Found all expected endpoints: [netserver-0] Mar 16 21:44:33.104: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.95 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7899 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:44:33.104: INFO: >>> kubeConfig: /root/.kube/config I0316 21:44:33.137590 6 log.go:172] (0xc002ad0580) (0xc001466d20) Create stream I0316 21:44:33.137637 6 log.go:172] (0xc002ad0580) (0xc001466d20) Stream added, broadcasting: 1 I0316 21:44:33.139921 6 log.go:172] (0xc002ad0580) Reply frame received for 1 I0316 21:44:33.139961 6 log.go:172] (0xc002ad0580) (0xc00055cfa0) Create stream I0316 21:44:33.139975 6 log.go:172] (0xc002ad0580) (0xc00055cfa0) Stream added, broadcasting: 3 I0316 21:44:33.141098 6 log.go:172] (0xc002ad0580) Reply frame received for 3 I0316 21:44:33.141303 6 log.go:172] (0xc002ad0580) (0xc000a78be0) Create stream I0316 21:44:33.141330 6 log.go:172] (0xc002ad0580) (0xc000a78be0) Stream added, broadcasting: 5 I0316 21:44:33.142399 6 log.go:172] (0xc002ad0580) Reply frame received for 5 I0316 21:44:34.220778 6 log.go:172] (0xc002ad0580) Data frame received for 3 I0316 21:44:34.220829 6 log.go:172] (0xc00055cfa0) (3) Data frame handling I0316 21:44:34.220853 6 log.go:172] (0xc00055cfa0) (3) Data frame sent I0316 21:44:34.220872 6 log.go:172] (0xc002ad0580) Data frame received for 3 I0316 21:44:34.220894 6 log.go:172] (0xc00055cfa0) (3) Data frame handling I0316 21:44:34.221339 6 log.go:172] (0xc002ad0580) Data frame received for 5 I0316 21:44:34.221370 6 log.go:172] (0xc000a78be0) (5) Data frame handling I0316 21:44:34.223210 6 log.go:172] (0xc002ad0580) Data frame received for 1 I0316 21:44:34.223242 6 log.go:172] (0xc001466d20) (1) Data frame handling I0316 21:44:34.223281 6 log.go:172] (0xc001466d20) (1) Data frame sent I0316 21:44:34.223311 6 log.go:172] (0xc002ad0580) (0xc001466d20) Stream removed, broadcasting: 1 I0316 21:44:34.223435 6 log.go:172] (0xc002ad0580) (0xc001466d20) Stream removed, broadcasting: 1 I0316 21:44:34.223511 6 log.go:172] (0xc002ad0580) (0xc00055cfa0) Stream removed, broadcasting: 3 I0316 21:44:34.223685 6 log.go:172] (0xc002ad0580) Go away received I0316 21:44:34.223748 6 log.go:172] (0xc002ad0580) (0xc000a78be0) Stream removed, broadcasting: 5 Mar 16 21:44:34.223: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:44:34.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7899" for this suite. 
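The two ExecWithOptions entries above are the same UDP probe run from the host-network helper pod against each netserver pod; a non-empty hostname reply is what marks an endpoint as found. The probe can be replayed by hand with the values from this run (the namespace and the 10.244.x.x pod IPs are per-run and will differ elsewhere):

kubectl exec --namespace=pod-network-test-7899 host-test-container-pod -c agnhost -- \
  /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.1.67 8081 | grep -v "^\s*$"'
kubectl exec --namespace=pod-network-test-7899 host-test-container-pod -c agnhost -- \
  /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.2.95 8081 | grep -v "^\s*$"'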
• [SLOW TEST:30.444 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2334,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:44:34.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods changes Mar 16 21:44:39.335: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:44:39.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5983" for this suite.
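Adoption and release here are purely a function of pod labels versus the ReplicaSet selector: the pre-existing pod gains an ownerReference when it matches (adoption), and changing its "name" label makes the controller drop that ownerReference and start a replacement (release). A hypothetical manual version of the release step, using the pod name and label key from the log (the new label value is made up):

kubectl label pod pod-adoption-release name=pod-adoption-release-released --overwrite
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'   # now empty
kubectl get pods -l name=pod-adoption-release   # only the ReplicaSet's replacement pod remains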
• [SLOW TEST:5.216 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":128,"skipped":2339,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:44:39.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5150 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5150;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5150 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5150;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5150.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5150.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5150.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5150.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5150.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5150.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5150.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5150.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5150.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5150.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5150.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.197.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.197.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.197.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.197.67_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5150 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5150;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5150 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5150;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5150.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5150.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5150.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5150.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5150.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5150.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5150.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5150.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5150.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5150.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5150.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5150.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.197.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.197.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.197.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.197.67_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 21:44:45.792: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.796: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.799: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.803: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.807: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.810: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.813: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.817: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.833: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.836: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.839: INFO: Unable to read jessie_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.842: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.844: INFO: Unable to read jessie_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.847: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.849: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.852: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:45.870: INFO: Lookups using dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5150 wheezy_tcp@dns-test-service.dns-5150 wheezy_udp@dns-test-service.dns-5150.svc wheezy_tcp@dns-test-service.dns-5150.svc wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5150 jessie_tcp@dns-test-service.dns-5150 jessie_udp@dns-test-service.dns-5150.svc jessie_tcp@dns-test-service.dns-5150.svc jessie_udp@_http._tcp.dns-test-service.dns-5150.svc jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc] Mar 16 21:44:50.875: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.879: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.883: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.886: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.890: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.893: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.923: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.926: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.929: INFO: Unable to read jessie_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.932: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.936: INFO: Unable to read jessie_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.939: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.942: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.946: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:50.966: INFO: Lookups using dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5150 wheezy_tcp@dns-test-service.dns-5150 wheezy_udp@dns-test-service.dns-5150.svc wheezy_tcp@dns-test-service.dns-5150.svc wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5150 jessie_tcp@dns-test-service.dns-5150 jessie_udp@dns-test-service.dns-5150.svc jessie_tcp@dns-test-service.dns-5150.svc jessie_udp@_http._tcp.dns-test-service.dns-5150.svc jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc] Mar 16 21:44:55.875: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.882: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.887: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.890: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150 from pod 
dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.893: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.895: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.897: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.917: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.920: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.923: INFO: Unable to read jessie_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.926: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.929: INFO: Unable to read jessie_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.932: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.935: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:44:55.956: INFO: Lookups using dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5150 wheezy_tcp@dns-test-service.dns-5150 wheezy_udp@dns-test-service.dns-5150.svc wheezy_tcp@dns-test-service.dns-5150.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5150 jessie_tcp@dns-test-service.dns-5150 jessie_udp@dns-test-service.dns-5150.svc jessie_tcp@dns-test-service.dns-5150.svc jessie_udp@_http._tcp.dns-test-service.dns-5150.svc jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc] Mar 16 21:45:00.875: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.879: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.882: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.885: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.889: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.892: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.920: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.923: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.926: INFO: Unable to read jessie_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.929: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.932: INFO: Unable to read jessie_udp@dns-test-service.dns-5150.svc from pod 
dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.935: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.938: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.940: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:00.959: INFO: Lookups using dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5150 wheezy_tcp@dns-test-service.dns-5150 wheezy_udp@dns-test-service.dns-5150.svc wheezy_tcp@dns-test-service.dns-5150.svc wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5150 jessie_tcp@dns-test-service.dns-5150 jessie_udp@dns-test-service.dns-5150.svc jessie_tcp@dns-test-service.dns-5150.svc jessie_udp@_http._tcp.dns-test-service.dns-5150.svc jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc] Mar 16 21:45:05.875: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.879: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.883: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.887: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.890: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.894: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.897: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.900: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod 
dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.924: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.927: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.930: INFO: Unable to read jessie_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.934: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.937: INFO: Unable to read jessie_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.941: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.944: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.947: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:05.972: INFO: Lookups using dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5150 wheezy_tcp@dns-test-service.dns-5150 wheezy_udp@dns-test-service.dns-5150.svc wheezy_tcp@dns-test-service.dns-5150.svc wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5150 jessie_tcp@dns-test-service.dns-5150 jessie_udp@dns-test-service.dns-5150.svc jessie_tcp@dns-test-service.dns-5150.svc jessie_udp@_http._tcp.dns-test-service.dns-5150.svc jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc] Mar 16 21:45:10.875: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.879: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.883: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the 
server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.886: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.890: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.893: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.920: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.923: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.927: INFO: Unable to read jessie_udp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.930: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150 from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.933: INFO: Unable to read jessie_udp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.936: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.939: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.942: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: the server could not find the requested resource (get pods dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca) Mar 16 21:45:10.960: INFO: Lookups using dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5150 wheezy_tcp@dns-test-service.dns-5150 wheezy_udp@dns-test-service.dns-5150.svc wheezy_tcp@dns-test-service.dns-5150.svc wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5150 jessie_tcp@dns-test-service.dns-5150 jessie_udp@dns-test-service.dns-5150.svc jessie_tcp@dns-test-service.dns-5150.svc jessie_udp@_http._tcp.dns-test-service.dns-5150.svc jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc] Mar 16 21:45:15.879: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca: Get https://172.30.12.66:32770/api/v1/namespaces/dns-5150/pods/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca/proxy/results/wheezy_tcp@dns-test-service: stream error: stream ID 6377; INTERNAL_ERROR Mar 16 21:45:15.954: INFO: Lookups using dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca failed for: [wheezy_tcp@dns-test-service] Mar 16 21:45:20.967: INFO: DNS probes using dns-5150/dns-test-f66e8e02-895b-419b-af5f-5e33a3dcf1ca succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:45:22.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5150" for this suite. • [SLOW TEST:43.292 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":129,"skipped":2348,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:45:22.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
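------------------------------
For reference: the lifecycle-hook test below creates a pod declaring a postStart exec hook (the suite's actual hook targets the HTTPGet handler pod created above). A minimal stand-in spec, assuming a generic image and a hypothetical hook command rather than the suite's exact fixture:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-exec-hook   # name as seen in the log below
  spec:
    containers:
    - name: main
      image: busybox                     # stand-in image
      command: ['sh', '-c', 'sleep 600']
      lifecycle:
        postStart:
          exec:
            # hypothetical handler command for illustration only
            command: ['sh', '-c', 'echo poststart > /tmp/poststart']
  EOF

The kubelet runs the postStart command right after the container is created, and the container is not marked Running until the handler completes.
------------------------------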
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 16 21:45:30.906: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 21:45:30.922: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 21:45:32.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 21:45:32.927: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 21:45:34.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 21:45:34.926: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 21:45:36.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 21:45:36.927: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 21:45:38.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 21:45:38.926: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 21:45:40.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 21:45:40.926: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:45:40.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1286" for this suite. • [SLOW TEST:18.191 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2370,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:45:40.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464 STEP: creating a pod Mar 16 21:45:40.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8328 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 16 21:45:41.069: INFO: stderr: "" Mar 16 21:45:41.069: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve
and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 16 21:45:41.069: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 16 21:45:41.069: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8328" to be "running and ready, or succeeded" Mar 16 21:45:41.073: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.602677ms Mar 16 21:45:43.134: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06450741s Mar 16 21:45:45.138: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.068592576s Mar 16 21:45:45.138: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 16 21:45:45.138: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Mar 16 21:45:45.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8328' Mar 16 21:45:45.311: INFO: stderr: "" Mar 16 21:45:45.311: INFO: stdout: "I0316 21:45:43.442226 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/7l7 555\nI0316 21:45:43.642519 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/vxf 303\nI0316 21:45:43.843330 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/qpt 583\nI0316 21:45:44.042475 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/7nt 373\nI0316 21:45:44.242389 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/898 529\nI0316 21:45:44.442437 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/kt2p 522\nI0316 21:45:44.642400 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/n4n 458\nI0316 21:45:44.842393 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/jqgc 291\nI0316 21:45:45.042506 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/x9zb 532\nI0316 21:45:45.242359 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/c4d 444\n" STEP: limiting log lines Mar 16 21:45:45.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8328 --tail=1' Mar 16 21:45:45.428: INFO: stderr: "" Mar 16 21:45:45.428: INFO: stdout: "I0316 21:45:45.242359 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/c4d 444\n" Mar 16 21:45:45.428: INFO: got output "I0316 21:45:45.242359 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/c4d 444\n" STEP: limiting log bytes Mar 16 21:45:45.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8328 --limit-bytes=1' Mar 16 21:45:45.534: INFO: stderr: "" Mar 16 21:45:45.535: INFO: stdout: "I" Mar 16 21:45:45.535: INFO: got output "I" STEP: exposing timestamps Mar 16 21:45:45.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8328 --tail=1 --timestamps' Mar 16 21:45:45.636: INFO: stderr: "" Mar 16 21:45:45.636: INFO: stdout: "2020-03-16T21:45:45.442667113Z I0316 21:45:45.442444 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/7wrd 284\n" Mar 16 21:45:45.636: INFO: got output "2020-03-16T21:45:45.442667113Z I0316 21:45:45.442444 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/7wrd 284\n" STEP:
restricting to a time range Mar 16 21:45:48.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8328 --since=1s' Mar 16 21:45:48.254: INFO: stderr: "" Mar 16 21:45:48.254: INFO: stdout: "I0316 21:45:47.442468 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/k5x 367\nI0316 21:45:47.642453 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/nnhn 355\nI0316 21:45:47.842392 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/t4v 338\nI0316 21:45:48.042493 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/fmrm 571\nI0316 21:45:48.242387 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/f5qk 272\n" Mar 16 21:45:48.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8328 --since=24h' Mar 16 21:45:48.352: INFO: stderr: "" Mar 16 21:45:48.352: INFO: stdout: "I0316 21:45:43.442226 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/7l7 555\nI0316 21:45:43.642519 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/vxf 303\nI0316 21:45:43.843330 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/qpt 583\nI0316 21:45:44.042475 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/7nt 373\nI0316 21:45:44.242389 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/898 529\nI0316 21:45:44.442437 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/kt2p 522\nI0316 21:45:44.642400 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/n4n 458\nI0316 21:45:44.842393 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/jqgc 291\nI0316 21:45:45.042506 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/x9zb 532\nI0316 21:45:45.242359 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/c4d 444\nI0316 21:45:45.442444 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/7wrd 284\nI0316 21:45:45.642419 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/vfcn 454\nI0316 21:45:45.842377 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/zzn 557\nI0316 21:45:46.042391 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/2rd6 382\nI0316 21:45:46.242439 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/2jn 390\nI0316 21:45:46.442447 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/s6r 327\nI0316 21:45:46.642443 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/5m8 359\nI0316 21:45:46.842972 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/42pq 225\nI0316 21:45:47.042428 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/6wp 556\nI0316 21:45:47.242392 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/bm7 588\nI0316 21:45:47.442468 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/k5x 367\nI0316 21:45:47.642453 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/nnhn 355\nI0316 21:45:47.842392 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/t4v 338\nI0316 21:45:48.042493 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/fmrm 571\nI0316 21:45:48.242387 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/f5qk 272\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470 Mar 16 21:45:48.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod 
logs-generator --namespace=kubectl-8328' Mar 16 21:45:59.266: INFO: stderr: "" Mar 16 21:45:59.266: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:45:59.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8328" for this suite. • [SLOW TEST:18.336 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":131,"skipped":2391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:45:59.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:45:59.359: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 5.45994ms) Mar 16 21:45:59.362: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.250925ms) Mar 16 21:45:59.365: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.406272ms) Mar 16 21:45:59.369: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.586704ms) Mar 16 21:45:59.372: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.144701ms) Mar 16 21:45:59.377: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.832527ms) Mar 16 21:45:59.383: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.498774ms) Mar 16 21:45:59.390: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 7.648112ms) Mar 16 21:45:59.393: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.827034ms) Mar 16 21:45:59.396: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.487528ms) Mar 16 21:45:59.398: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.362414ms) Mar 16 21:45:59.400: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.166201ms) Mar 16 21:45:59.403: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.848185ms) Mar 16 21:45:59.406: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.148244ms) Mar 16 21:45:59.409: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.658717ms) Mar 16 21:45:59.412: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.219988ms) Mar 16 21:45:59.415: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.116089ms) Mar 16 21:45:59.419: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.322154ms) Mar 16 21:45:59.422: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.221903ms) Mar 16 21:45:59.470: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 48.425721ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:45:59.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6135" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":132,"skipped":2430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:45:59.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:45:59.537: INFO: Creating deployment "test-recreate-deployment" Mar 16 21:45:59.544: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Mar 16 21:45:59.556: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 16 21:46:01.563: INFO: Waiting for deployment "test-recreate-deployment" to complete Mar 16 21:46:01.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991959, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991959, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991959, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991959, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 21:46:03.569: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 16 21:46:03.575: INFO: Updating deployment test-recreate-deployment Mar 16 21:46:03.575: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 16 21:46:04.026: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5620 /apis/apps/v1/namespaces/deployment-5620/deployments/test-recreate-deployment ecc18c81-9915-4a13-8e14-9f31a994d12a 326458 2 2020-03-16 21:45:59 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004174ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-16 21:46:03 +0000 UTC,LastTransitionTime:2020-03-16 21:46:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-16 21:46:03 +0000 UTC,LastTransitionTime:2020-03-16 21:45:59 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 16 21:46:04.029: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-5620 /apis/apps/v1/namespaces/deployment-5620/replicasets/test-recreate-deployment-5f94c574ff 8639eb0a-09fa-40b3-8f5d-fbe624ae985f 326456 1 2020-03-16 21:46:03 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment ecc18c81-9915-4a13-8e14-9f31a994d12a 0xc004175427 0xc004175428}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041754c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 21:46:04.029: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 
16 21:46:04.029: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-5620 /apis/apps/v1/namespaces/deployment-5620/replicasets/test-recreate-deployment-799c574856 70786d7b-99d7-4c51-a6ab-fb8c882eef88 326447 2 2020-03-16 21:45:59 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment ecc18c81-9915-4a13-8e14-9f31a994d12a 0xc004175537 0xc004175538}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041755d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 21:46:04.081: INFO: Pod "test-recreate-deployment-5f94c574ff-pkjhd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-pkjhd test-recreate-deployment-5f94c574ff- deployment-5620 /api/v1/namespaces/deployment-5620/pods/test-recreate-deployment-5f94c574ff-pkjhd 8e1fc442-182f-43e6-b2c1-a03525efc6e8 326459 0 2020-03-16 21:46:03 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 8639eb0a-09fa-40b3-8f5d-fbe624ae985f 0xc004175d97 0xc004175d98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-btdh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-btdh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-btdh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:46:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:46:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:46:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 21:46:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-16 21:46:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:46:04.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5620" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":133,"skipped":2458,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:46:04.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-dff530a1-25a6-4d84-a7ec-42d6bc46b380 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:46:04.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4930" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":134,"skipped":2470,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:46:04.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 16 21:46:04.220: INFO: Waiting up to 5m0s for pod "client-containers-59dc147f-7a68-43b3-9135-a68bf017c444" in namespace "containers-1566" to be "success or failure" Mar 16 21:46:04.278: INFO: Pod "client-containers-59dc147f-7a68-43b3-9135-a68bf017c444": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.079331ms Mar 16 21:46:06.289: INFO: Pod "client-containers-59dc147f-7a68-43b3-9135-a68bf017c444": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0697642s Mar 16 21:46:08.293: INFO: Pod "client-containers-59dc147f-7a68-43b3-9135-a68bf017c444": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073456654s Mar 16 21:46:10.298: INFO: Pod "client-containers-59dc147f-7a68-43b3-9135-a68bf017c444": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077834963s STEP: Saw pod success Mar 16 21:46:10.298: INFO: Pod "client-containers-59dc147f-7a68-43b3-9135-a68bf017c444" satisfied condition "success or failure" Mar 16 21:46:10.301: INFO: Trying to get logs from node jerma-worker2 pod client-containers-59dc147f-7a68-43b3-9135-a68bf017c444 container test-container: STEP: delete the pod Mar 16 21:46:10.330: INFO: Waiting for pod client-containers-59dc147f-7a68-43b3-9135-a68bf017c444 to disappear Mar 16 21:46:10.334: INFO: Pod client-containers-59dc147f-7a68-43b3-9135-a68bf017c444 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:46:10.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1566" for this suite. • [SLOW TEST:6.197 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2483,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:46:10.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 21:46:11.171: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 21:46:13.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991971, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991971, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment 
does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991971, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719991971, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:46:16.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:46:16.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4147" for this suite. STEP: Destroying namespace "webhook-4147-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.427 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":136,"skipped":2490,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:46:16.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 21:46:16.937: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bd1c408-c4a9-4f6f-90e0-5f3f3334207e" in namespace "projected-6676" to be "success or failure" Mar 16 21:46:17.092: INFO: Pod "downwardapi-volume-2bd1c408-c4a9-4f6f-90e0-5f3f3334207e": Phase="Pending", Reason="", readiness=false.
Elapsed: 155.533137ms Mar 16 21:46:19.116: INFO: Pod "downwardapi-volume-2bd1c408-c4a9-4f6f-90e0-5f3f3334207e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179108446s Mar 16 21:46:21.506: INFO: Pod "downwardapi-volume-2bd1c408-c4a9-4f6f-90e0-5f3f3334207e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568953329s Mar 16 21:46:23.509: INFO: Pod "downwardapi-volume-2bd1c408-c4a9-4f6f-90e0-5f3f3334207e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.572208876s STEP: Saw pod success Mar 16 21:46:23.509: INFO: Pod "downwardapi-volume-2bd1c408-c4a9-4f6f-90e0-5f3f3334207e" satisfied condition "success or failure" Mar 16 21:46:23.512: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2bd1c408-c4a9-4f6f-90e0-5f3f3334207e container client-container: STEP: delete the pod Mar 16 21:46:23.551: INFO: Waiting for pod downwardapi-volume-2bd1c408-c4a9-4f6f-90e0-5f3f3334207e to disappear Mar 16 21:46:23.557: INFO: Pod downwardapi-volume-2bd1c408-c4a9-4f6f-90e0-5f3f3334207e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:46:23.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6676" for this suite. • [SLOW TEST:6.781 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2503,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:46:23.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 16 21:46:23.643: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:46:31.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8108" for this suite. 
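------------------------------
For reference: the RestartAlways init-container behavior exercised above can be reproduced with a sketch like the following (pod name, images, and commands are assumptions). Containers listed under spec.initContainers run sequentially and must each exit 0 before the regular containers start:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo            # hypothetical name
  spec:
    restartPolicy: Always
    initContainers:            # run in order; each must complete first
    - name: init-1
      image: busybox
      command: ['sh', '-c', 'echo init-1 done']
    - name: init-2
      image: busybox
      command: ['sh', '-c', 'echo init-2 done']
    containers:
    - name: app
      image: busybox
      command: ['sh', '-c', 'sleep 600']
  EOF
  kubectl get pod init-demo    # STATUS passes through Init:0/2 and Init:1/2 before Running
------------------------------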
• [SLOW TEST:7.839 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":138,"skipped":2508,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:46:31.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-8902bfb2-fe7f-4be5-a180-57ca07781908 STEP: Creating a pod to test consume secrets Mar 16 21:46:31.490: INFO: Waiting up to 5m0s for pod "pod-secrets-afed88ee-bc4e-484d-b733-7585c216473e" in namespace "secrets-5097" to be "success or failure" Mar 16 21:46:31.500: INFO: Pod "pod-secrets-afed88ee-bc4e-484d-b733-7585c216473e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.13156ms Mar 16 21:46:33.504: INFO: Pod "pod-secrets-afed88ee-bc4e-484d-b733-7585c216473e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0145073s Mar 16 21:46:35.508: INFO: Pod "pod-secrets-afed88ee-bc4e-484d-b733-7585c216473e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018522758s STEP: Saw pod success Mar 16 21:46:35.508: INFO: Pod "pod-secrets-afed88ee-bc4e-484d-b733-7585c216473e" satisfied condition "success or failure" Mar 16 21:46:35.511: INFO: Trying to get logs from node jerma-worker pod pod-secrets-afed88ee-bc4e-484d-b733-7585c216473e container secret-volume-test: STEP: delete the pod Mar 16 21:46:35.532: INFO: Waiting for pod pod-secrets-afed88ee-bc4e-484d-b733-7585c216473e to disappear Mar 16 21:46:35.565: INFO: Pod pod-secrets-afed88ee-bc4e-484d-b733-7585c216473e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:46:35.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5097" for this suite. 
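------------------------------
For reference: consuming a secret through a volume with key-to-path mappings, as exercised above, looks roughly like this (secret name, key, and paths are assumptions, not the suite's exact fixture):

  kubectl create secret generic secret-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mapping-demo   # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ['sh', '-c', 'cat /etc/secret-volume/new-path-data-1']
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-demo
        items:                  # the "mapping": remap key data-1 to a custom file name
        - key: data-1
          path: new-path-data-1
  EOF
------------------------------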
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:46:35.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-497bcf6f-6598-4298-90a1-485d0bc68240 STEP: Creating a pod to test consume configMaps Mar 16 21:46:35.634: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5318f3e1-93f6-443d-96d6-feb41968b555" in namespace "projected-8536" to be "success or failure" Mar 16 21:46:35.663: INFO: Pod "pod-projected-configmaps-5318f3e1-93f6-443d-96d6-feb41968b555": Phase="Pending", Reason="", readiness=false. Elapsed: 28.648848ms Mar 16 21:46:37.770: INFO: Pod "pod-projected-configmaps-5318f3e1-93f6-443d-96d6-feb41968b555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135761717s Mar 16 21:46:39.774: INFO: Pod "pod-projected-configmaps-5318f3e1-93f6-443d-96d6-feb41968b555": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140280488s STEP: Saw pod success Mar 16 21:46:39.774: INFO: Pod "pod-projected-configmaps-5318f3e1-93f6-443d-96d6-feb41968b555" satisfied condition "success or failure" Mar 16 21:46:39.778: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-5318f3e1-93f6-443d-96d6-feb41968b555 container projected-configmap-volume-test: STEP: delete the pod Mar 16 21:46:39.817: INFO: Waiting for pod pod-projected-configmaps-5318f3e1-93f6-443d-96d6-feb41968b555 to disappear Mar 16 21:46:39.853: INFO: Pod pod-projected-configmaps-5318f3e1-93f6-443d-96d6-feb41968b555 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:46:39.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8536" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2532,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:46:39.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 16 21:46:39.925: INFO: Waiting up to 5m0s for pod "var-expansion-13cc3fa2-c6d8-461d-bc35-b05e991f6a3c" in namespace "var-expansion-5372" to be "success or failure" Mar 16 21:46:39.985: INFO: Pod "var-expansion-13cc3fa2-c6d8-461d-bc35-b05e991f6a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 59.671306ms Mar 16 21:46:41.988: INFO: Pod "var-expansion-13cc3fa2-c6d8-461d-bc35-b05e991f6a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063318741s Mar 16 21:46:43.992: INFO: Pod "var-expansion-13cc3fa2-c6d8-461d-bc35-b05e991f6a3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06712861s STEP: Saw pod success Mar 16 21:46:43.992: INFO: Pod "var-expansion-13cc3fa2-c6d8-461d-bc35-b05e991f6a3c" satisfied condition "success or failure" Mar 16 21:46:43.995: INFO: Trying to get logs from node jerma-worker pod var-expansion-13cc3fa2-c6d8-461d-bc35-b05e991f6a3c container dapi-container: STEP: delete the pod Mar 16 21:46:44.014: INFO: Waiting for pod var-expansion-13cc3fa2-c6d8-461d-bc35-b05e991f6a3c to disappear Mar 16 21:46:44.018: INFO: Pod var-expansion-13cc3fa2-c6d8-461d-bc35-b05e991f6a3c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:46:44.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5372" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2534,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:46:44.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:46:48.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3239" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2539,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:46:48.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 16 21:46:48.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3476' Mar 16 21:46:48.301: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 16 21:46:48.301: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 16 21:46:48.324: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Mar 16 21:46:48.333: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 16 21:46:48.342: INFO: scanned /root for discovery docs: Mar 16 21:46:48.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3476' Mar 16 21:47:04.249: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 16 21:47:04.249: INFO: stdout: "Created e2e-test-httpd-rc-0c07ad46571791bbdab42f32c4c419ff\nScaling up e2e-test-httpd-rc-0c07ad46571791bbdab42f32c4c419ff from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-0c07ad46571791bbdab42f32c4c419ff up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-0c07ad46571791bbdab42f32c4c419ff to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 16 21:47:04.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-3476' Mar 16 21:47:04.339: INFO: stderr: "" Mar 16 21:47:04.339: INFO: stdout: "e2e-test-httpd-rc-0c07ad46571791bbdab42f32c4c419ff-w8fp8 " Mar 16 21:47:04.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0c07ad46571791bbdab42f32c4c419ff-w8fp8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3476' Mar 16 21:47:04.443: INFO: stderr: "" Mar 16 21:47:04.443: INFO: stdout: "true" Mar 16 21:47:04.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0c07ad46571791bbdab42f32c4c419ff-w8fp8 -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3476' Mar 16 21:47:04.534: INFO: stderr: "" Mar 16 21:47:04.534: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 16 21:47:04.534: INFO: e2e-test-httpd-rc-0c07ad46571791bbdab42f32c4c419ff-w8fp8 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698 Mar 16 21:47:04.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3476' Mar 16 21:47:04.665: INFO: stderr: "" Mar 16 21:47:04.665: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:47:04.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3476" for this suite. • [SLOW TEST:16.595 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":143,"skipped":2543,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:47:04.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 16 21:47:04.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1278' Mar 16 21:47:05.299: INFO: stderr: "" Mar 16 21:47:05.299: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 16 21:47:06.303: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 21:47:06.303: INFO: Found 0 / 1 Mar 16 21:47:07.304: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 21:47:07.304: INFO: Found 0 / 1 Mar 16 21:47:08.304: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 21:47:08.304: INFO: Found 0 / 1 Mar 16 21:47:09.304: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 21:47:09.304: INFO: Found 1 / 1 Mar 16 21:47:09.304: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Mar 16 21:47:09.307: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 21:47:09.307: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 16 21:47:09.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-67xr9 --namespace=kubectl-1278 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 16 21:47:09.438: INFO: stderr: "" Mar 16 21:47:09.439: INFO: stdout: "pod/agnhost-master-67xr9 patched\n" STEP: checking annotations Mar 16 21:47:09.442: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 21:47:09.442: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:47:09.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1278" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":144,"skipped":2552,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:47:09.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 16 21:47:09.487: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 21:47:09.510: INFO: Waiting for terminating namespaces to be deleted... 
Mar 16 21:47:09.512: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 16 21:47:09.518: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 21:47:09.518: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 21:47:09.518: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 21:47:09.518: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 21:47:09.518: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 16 21:47:09.523: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 21:47:09.523: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 21:47:09.523: INFO: agnhost-master-67xr9 from kubectl-1278 started at 2020-03-16 21:47:05 +0000 UTC (1 container statuses recorded) Mar 16 21:47:09.523: INFO: Container agnhost-master ready: true, restart count 0 Mar 16 21:47:09.523: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 21:47:09.523: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4e536bef-d175-437c-8b4a-930d7d00ce10 95 STEP: Trying to create a pod (pod4) with hostport 54322 and hostIP 0.0.0.0 (empty string here) and expect scheduled STEP: Trying to create another pod (pod5) with hostport 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-4e536bef-d175-437c-8b4a-930d7d00ce10 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-4e536bef-d175-437c-8b4a-930d7d00ce10 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:52:17.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6009" for this suite.
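
The pod manifests behind pod4 and pod5 never appear in the log; the following is a minimal sketch of the conflicting pair, using a hand-applied node label in place of the suite's random one (the label key, pod names, and the pause image are stand-ins; the port, protocol, and hostIP values mirror the STEP lines above):

# Label one node so both pods land on it, as the test does with a random label.
kubectl label node jerma-worker e2e-hostport-demo=yes

# pod4 leaves hostIP empty (0.0.0.0) and schedules; pod5 asks for the same
# hostPort and protocol on 127.0.0.1, and because 0.0.0.0 covers every host
# address the scheduler reports a conflict and pod5 stays Pending.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    e2e-hostport-demo: "yes"
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    e2e-hostport-demo: "yes"
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
EOF
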
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.275 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":145,"skipped":2588,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:52:17.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:52:17.820: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"46f6715d-4fb4-4265-bc0b-9795d7c480d9", Controller:(*bool)(0xc002f87c32), BlockOwnerDeletion:(*bool)(0xc002f87c33)}} Mar 16 21:52:17.868: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b95b9435-eecd-426f-a4b1-d7d078d87499", Controller:(*bool)(0xc002d8ceca), BlockOwnerDeletion:(*bool)(0xc002d8cecb)}} Mar 16 21:52:17.874: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"81111844-1d2c-4582-a744-fad7e5db8ec9", Controller:(*bool)(0xc002d8d07a), BlockOwnerDeletion:(*bool)(0xc002d8d07b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:52:22.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2977" for this suite. 
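
The circle above (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) is built by setting metadata.ownerReferences at creation time. Since UIDs are assigned by the API server, a hand-rolled sketch has to read them back first; the pod and namespace names below are stand-ins, and the boolean values for controller and blockOwnerDeletion are assumptions (the log only shows that both pointers are set):

# Read the server-assigned UID of the prospective owner...
owner_uid=$(kubectl get pod pod1 -n gc-demo -o jsonpath='{.metadata.uid}')

# ...and attach it to the dependent. Repeating this so pod3 depends on pod2
# and pod1 depends on pod3 closes the circle; deletion must still cascade,
# which is what "should not be blocked by dependency circle" asserts.
kubectl patch pod pod2 -n gc-demo --type=merge -p "{
  \"metadata\": {\"ownerReferences\": [{
    \"apiVersion\": \"v1\",
    \"kind\": \"Pod\",
    \"name\": \"pod1\",
    \"uid\": \"${owner_uid}\",
    \"controller\": true,
    \"blockOwnerDeletion\": true
  }]}
}"
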
• [SLOW TEST:5.455 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":146,"skipped":2589,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:52:23.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3843.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3843.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3843.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3843.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3843.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3843.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3843.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3843.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 148.171.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.171.148_udp@PTR;check="$$(dig +tcp +noall +answer +search 148.171.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.171.148_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3843.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3843.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3843.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3843.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3843.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3843.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3843.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3843.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3843.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 148.171.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.171.148_udp@PTR;check="$$(dig +tcp +noall +answer +search 148.171.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.171.148_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 21:52:29.674: INFO: Unable to read wheezy_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:29.678: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:29.681: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:29.683: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:29.704: INFO: Unable to read jessie_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:29.708: INFO: Unable to read jessie_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:29.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:29.713: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:29.731: INFO: Lookups using dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353 failed for: [wheezy_udp@dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_udp@dns-test-service.dns-3843.svc.cluster.local jessie_tcp@dns-test-service.dns-3843.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local] Mar 16 21:52:34.736: INFO: Unable to read wheezy_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:34.739: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods 
dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:34.742: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:34.746: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:34.770: INFO: Unable to read jessie_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:34.772: INFO: Unable to read jessie_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:34.775: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:34.778: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:34.796: INFO: Lookups using dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353 failed for: [wheezy_udp@dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_udp@dns-test-service.dns-3843.svc.cluster.local jessie_tcp@dns-test-service.dns-3843.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local] Mar 16 21:52:39.736: INFO: Unable to read wheezy_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:39.740: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:39.744: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:39.747: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:39.766: INFO: Unable to read jessie_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the 
server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:39.768: INFO: Unable to read jessie_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:39.772: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:39.775: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:39.791: INFO: Lookups using dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353 failed for: [wheezy_udp@dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_udp@dns-test-service.dns-3843.svc.cluster.local jessie_tcp@dns-test-service.dns-3843.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local] Mar 16 21:52:44.735: INFO: Unable to read wheezy_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:44.738: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:44.740: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:44.743: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:44.783: INFO: Unable to read jessie_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:44.786: INFO: Unable to read jessie_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:44.788: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:44.791: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod 
dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:44.810: INFO: Lookups using dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353 failed for: [wheezy_udp@dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_udp@dns-test-service.dns-3843.svc.cluster.local jessie_tcp@dns-test-service.dns-3843.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local] Mar 16 21:52:49.735: INFO: Unable to read wheezy_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:49.739: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:49.742: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:49.745: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:49.772: INFO: Unable to read jessie_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:49.780: INFO: Unable to read jessie_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:49.787: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:49.791: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:49.816: INFO: Lookups using dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353 failed for: [wheezy_udp@dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_udp@dns-test-service.dns-3843.svc.cluster.local jessie_tcp@dns-test-service.dns-3843.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local] Mar 16 
21:52:54.736: INFO: Unable to read wheezy_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:54.740: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:54.743: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:54.747: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:54.771: INFO: Unable to read jessie_udp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:54.774: INFO: Unable to read jessie_tcp@dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:54.776: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:54.780: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local from pod dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353: the server could not find the requested resource (get pods dns-test-a5de0174-3ff3-40af-9455-971229e19353) Mar 16 21:52:54.799: INFO: Lookups using dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353 failed for: [wheezy_udp@dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@dns-test-service.dns-3843.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_udp@dns-test-service.dns-3843.svc.cluster.local jessie_tcp@dns-test-service.dns-3843.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3843.svc.cluster.local] Mar 16 21:52:59.804: INFO: DNS probes using dns-3843/dns-test-a5de0174-3ff3-40af-9455-971229e19353 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:53:00.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3843" for this suite. 
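
Each probe in the wheezy and jessie loops above reduces to a single dig query that drops an OK marker file once an answer section comes back; the doubled $$ survives because the kubelet expands $(VAR) references in container commands and $$ is the escape for a literal $. One probe unrolled as plain shell, with the service and namespace from this run:

# UDP A-record lookup through the pod's search path (+search); +noall +answer
# keeps only answer records, so $check stays empty until DNS serves the name.
check="$(dig +notcp +noall +answer +search dns-test-service.dns-3843.svc.cluster.local A)"
test -n "$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3843.svc.cluster.local
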
• [SLOW TEST:37.181 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":147,"skipped":2621,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:53:00.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:53:00.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-578" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":148,"skipped":2707,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:53:00.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 16 21:53:00.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4800' Mar 16 21:53:03.649: INFO: stderr: "" Mar 16 21:53:03.649: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 21:53:03.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4800' Mar 16 21:53:03.746: INFO: stderr: "" Mar 16 21:53:03.746: INFO: stdout: "update-demo-nautilus-7rlds update-demo-nautilus-zxjck " Mar 16 21:53:03.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rlds -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Mar 16 21:53:03.827: INFO: stderr: "" Mar 16 21:53:03.827: INFO: stdout: "" Mar 16 21:53:03.827: INFO: update-demo-nautilus-7rlds is created but not running Mar 16 21:53:08.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4800' Mar 16 21:53:08.923: INFO: stderr: "" Mar 16 21:53:08.923: INFO: stdout: "update-demo-nautilus-7rlds update-demo-nautilus-zxjck " Mar 16 21:53:08.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rlds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Mar 16 21:53:09.015: INFO: stderr: "" Mar 16 21:53:09.015: INFO: stdout: "true" Mar 16 21:53:09.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rlds -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4800' Mar 16 21:53:09.109: INFO: stderr: "" Mar 16 21:53:09.109: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 21:53:09.109: INFO: validating pod update-demo-nautilus-7rlds Mar 16 21:53:09.112: INFO: got data: { "image": "nautilus.jpg" } Mar 16 21:53:09.112: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 21:53:09.112: INFO: update-demo-nautilus-7rlds is verified up and running Mar 16 21:53:09.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxjck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Mar 16 21:53:09.204: INFO: stderr: "" Mar 16 21:53:09.204: INFO: stdout: "true" Mar 16 21:53:09.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxjck -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4800' Mar 16 21:53:09.313: INFO: stderr: "" Mar 16 21:53:09.313: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 21:53:09.313: INFO: validating pod update-demo-nautilus-zxjck Mar 16 21:53:09.318: INFO: got data: { "image": "nautilus.jpg" } Mar 16 21:53:09.318: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 16 21:53:09.318: INFO: update-demo-nautilus-zxjck is verified up and running STEP: rolling-update to new replication controller Mar 16 21:53:09.322: INFO: scanned /root for discovery docs: Mar 16 21:53:09.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4800' Mar 16 21:53:31.832: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 16 21:53:31.832: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 21:53:31.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4800' Mar 16 21:53:31.925: INFO: stderr: "" Mar 16 21:53:31.925: INFO: stdout: "update-demo-kitten-crlrc update-demo-kitten-fc7fb " Mar 16 21:53:31.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-crlrc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Mar 16 21:53:32.012: INFO: stderr: "" Mar 16 21:53:32.012: INFO: stdout: "true" Mar 16 21:53:32.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-crlrc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4800' Mar 16 21:53:32.094: INFO: stderr: "" Mar 16 21:53:32.094: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 16 21:53:32.094: INFO: validating pod update-demo-kitten-crlrc Mar 16 21:53:32.098: INFO: got data: { "image": "kitten.jpg" } Mar 16 21:53:32.098: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 16 21:53:32.098: INFO: update-demo-kitten-crlrc is verified up and running Mar 16 21:53:32.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fc7fb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Mar 16 21:53:32.190: INFO: stderr: "" Mar 16 21:53:32.190: INFO: stdout: "true" Mar 16 21:53:32.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fc7fb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4800' Mar 16 21:53:32.285: INFO: stderr: "" Mar 16 21:53:32.285: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 16 21:53:32.285: INFO: validating pod update-demo-kitten-fc7fb Mar 16 21:53:32.289: INFO: got data: { "image": "kitten.jpg" } Mar 16 21:53:32.289: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 16 21:53:32.289: INFO: update-demo-kitten-fc7fb is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:53:32.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4800" for this suite. • [SLOW TEST:31.786 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":149,"skipped":2726,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:53:32.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 16 21:53:36.419: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 16 21:53:51.506: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:53:51.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1292" for this suite. 
• [SLOW TEST:19.220 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":150,"skipped":2735,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:53:51.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:53:58.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8730" for this suite. • [SLOW TEST:7.051 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":151,"skipped":2754,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:53:58.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-ng95 STEP: Creating a pod to test atomic-volume-subpath Mar 16 21:53:58.650: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ng95" in namespace "subpath-3004" to be "success or failure" Mar 16 21:53:58.654: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Pending", Reason="", readiness=false. Elapsed: 3.821103ms Mar 16 21:54:00.658: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008052321s Mar 16 21:54:02.663: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Running", Reason="", readiness=true. Elapsed: 4.012426052s Mar 16 21:54:04.667: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Running", Reason="", readiness=true. Elapsed: 6.01671067s Mar 16 21:54:06.671: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Running", Reason="", readiness=true. Elapsed: 8.021005992s Mar 16 21:54:08.675: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Running", Reason="", readiness=true. Elapsed: 10.024841903s Mar 16 21:54:10.679: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Running", Reason="", readiness=true. Elapsed: 12.028601965s Mar 16 21:54:12.683: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Running", Reason="", readiness=true. Elapsed: 14.032965486s Mar 16 21:54:14.688: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Running", Reason="", readiness=true. Elapsed: 16.037397406s Mar 16 21:54:16.692: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Running", Reason="", readiness=true. Elapsed: 18.04171788s Mar 16 21:54:18.696: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Running", Reason="", readiness=true. Elapsed: 20.045546553s Mar 16 21:54:20.700: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Running", Reason="", readiness=true. Elapsed: 22.050026284s Mar 16 21:54:22.705: INFO: Pod "pod-subpath-test-configmap-ng95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.054821344s STEP: Saw pod success Mar 16 21:54:22.705: INFO: Pod "pod-subpath-test-configmap-ng95" satisfied condition "success or failure" Mar 16 21:54:22.708: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-ng95 container test-container-subpath-configmap-ng95: STEP: delete the pod Mar 16 21:54:22.764: INFO: Waiting for pod pod-subpath-test-configmap-ng95 to disappear Mar 16 21:54:22.768: INFO: Pod pod-subpath-test-configmap-ng95 no longer exists STEP: Deleting pod pod-subpath-test-configmap-ng95 Mar 16 21:54:22.768: INFO: Deleting pod "pod-subpath-test-configmap-ng95" in namespace "subpath-3004" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:54:22.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3004" for this suite. • [SLOW TEST:24.209 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":152,"skipped":2761,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:54:22.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-sj6r STEP: Creating a pod to test atomic-volume-subpath Mar 16 21:54:22.842: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-sj6r" in namespace "subpath-8035" to be "success or failure" Mar 16 21:54:22.846: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164294ms Mar 16 21:54:24.849: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007647735s Mar 16 21:54:26.872: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Running", Reason="", readiness=true. Elapsed: 4.029821232s Mar 16 21:54:28.878: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Running", Reason="", readiness=true. Elapsed: 6.036147159s Mar 16 21:54:30.882: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Running", Reason="", readiness=true. Elapsed: 8.040582708s Mar 16 21:54:32.887: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.04501006s Mar 16 21:54:34.890: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Running", Reason="", readiness=true. Elapsed: 12.048697255s Mar 16 21:54:36.894: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Running", Reason="", readiness=true. Elapsed: 14.052417721s Mar 16 21:54:38.898: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Running", Reason="", readiness=true. Elapsed: 16.056493398s Mar 16 21:54:40.902: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Running", Reason="", readiness=true. Elapsed: 18.060675077s Mar 16 21:54:42.907: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Running", Reason="", readiness=true. Elapsed: 20.064842339s Mar 16 21:54:44.911: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Running", Reason="", readiness=true. Elapsed: 22.068860218s Mar 16 21:54:46.915: INFO: Pod "pod-subpath-test-downwardapi-sj6r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.073172564s STEP: Saw pod success Mar 16 21:54:46.915: INFO: Pod "pod-subpath-test-downwardapi-sj6r" satisfied condition "success or failure" Mar 16 21:54:46.918: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-sj6r container test-container-subpath-downwardapi-sj6r: STEP: delete the pod Mar 16 21:54:46.960: INFO: Waiting for pod pod-subpath-test-downwardapi-sj6r to disappear Mar 16 21:54:46.997: INFO: Pod pod-subpath-test-downwardapi-sj6r no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-sj6r Mar 16 21:54:46.997: INFO: Deleting pod "pod-subpath-test-downwardapi-sj6r" in namespace "subpath-8035" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:54:47.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8035" for this suite. 
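
Both subpath tests above exercise volumeMounts.subPath against atomically-updated volume types (configmap and downward API); here is a minimal configmap-backed sketch that mounts a single key as one file (all names and the busybox image are stand-ins, and the suite's long-running probe container is omitted):

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo
data:
  key: value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/mnt/key"]   # prints "value", then the pod Succeeds
    volumeMounts:
    - name: cfg
      mountPath: /mnt/key
      subPath: key                 # mount only this key, not the whole volume
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo
EOF
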
• [SLOW TEST:24.232 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":153,"skipped":2777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:54:47.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 21:54:48.004: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 21:54:50.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992488, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992488, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992488, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992487, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:54:53.097: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 16 21:54:53.125: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:54:53.136: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-2757" for this suite. STEP: Destroying namespace "webhook-2757-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.264 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":154,"skipped":2848,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:54:53.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:54:53.318: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 16 21:54:56.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4618 create -f -' Mar 16 21:54:59.092: INFO: stderr: "" Mar 16 21:54:59.092: INFO: stdout: "e2e-test-crd-publish-openapi-8681-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 16 21:54:59.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4618 delete e2e-test-crd-publish-openapi-8681-crds test-foo' Mar 16 21:54:59.222: INFO: stderr: "" Mar 16 21:54:59.222: INFO: stdout: "e2e-test-crd-publish-openapi-8681-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 16 21:54:59.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4618 apply -f -' Mar 16 21:54:59.455: INFO: stderr: "" Mar 16 21:54:59.455: INFO: stdout: "e2e-test-crd-publish-openapi-8681-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 16 21:54:59.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4618 delete e2e-test-crd-publish-openapi-8681-crds test-foo' Mar 16 21:54:59.574: INFO: stderr: "" Mar 16 21:54:59.574: INFO: stdout: "e2e-test-crd-publish-openapi-8681-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 16 21:54:59.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4618 create -f -' Mar 16 21:54:59.771: INFO: rc: 1 Mar 16 21:54:59.771: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4618 apply -f -' Mar 16 21:54:59.990: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 16 21:54:59.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4618 create -f -' Mar 16 21:55:00.214: INFO: rc: 1 Mar 16 21:55:00.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4618 apply -f -' Mar 16 21:55:00.431: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 16 21:55:00.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8681-crds' Mar 16 21:55:00.675: INFO: stderr: "" Mar 16 21:55:00.675: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8681-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 16 21:55:00.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8681-crds.metadata' Mar 16 21:55:00.929: INFO: stderr: "" Mar 16 21:55:00.929: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8681-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). 
Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 16 21:55:00.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8681-crds.spec' Mar 16 21:55:01.157: INFO: stderr: "" Mar 16 21:55:01.157: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8681-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 16 21:55:01.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8681-crds.spec.bars' Mar 16 21:55:01.377: INFO: stderr: "" Mar 16 21:55:01.377: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8681-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 16 21:55:01.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8681-crds.spec.bars2' Mar 16 21:55:01.618: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:55:04.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4618" for this suite. • [SLOW TEST:11.243 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":155,"skipped":2850,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:55:04.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 16 21:55:04.615: INFO: Waiting up to 5m0s for pod "pod-63d80a7a-08c4-41d8-ac58-928584d1cbda" in namespace "emptydir-413" to be "success or failure" Mar 16 21:55:04.619: INFO: Pod "pod-63d80a7a-08c4-41d8-ac58-928584d1cbda": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541627ms Mar 16 21:55:06.632: INFO: Pod "pod-63d80a7a-08c4-41d8-ac58-928584d1cbda": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016567743s Mar 16 21:55:08.636: INFO: Pod "pod-63d80a7a-08c4-41d8-ac58-928584d1cbda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020533987s STEP: Saw pod success Mar 16 21:55:08.636: INFO: Pod "pod-63d80a7a-08c4-41d8-ac58-928584d1cbda" satisfied condition "success or failure" Mar 16 21:55:08.639: INFO: Trying to get logs from node jerma-worker pod pod-63d80a7a-08c4-41d8-ac58-928584d1cbda container test-container: STEP: delete the pod Mar 16 21:55:08.673: INFO: Waiting for pod pod-63d80a7a-08c4-41d8-ac58-928584d1cbda to disappear Mar 16 21:55:08.694: INFO: Pod pod-63d80a7a-08c4-41d8-ac58-928584d1cbda no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:55:08.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-413" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2876,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:55:08.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 16 21:55:08.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 16 21:55:08.916: INFO: stderr: "" Mar 16 21:55:08.916: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:55:08.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1549" for this suite. 
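------------------------------
For reference, the check `kubectl api-versions` performs in the test above can be done programmatically against the same apiserver. A minimal client-go sketch, not the suite's own code (assumes the kubeconfig path used throughout this run):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerGroups returns every API group/version the apiserver serves,
	// the same list `kubectl api-versions` flattens and prints.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	foundV1 := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" { // the legacy core group
				foundV1 = true
			}
		}
	}
	fmt.Println("core v1 available:", foundV1)
}
------------------------------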
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":157,"skipped":2888,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:55:08.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 16 21:55:08.985: INFO: Waiting up to 5m0s for pod "var-expansion-3600f314-ab24-43ce-9ee8-3ba4342df973" in namespace "var-expansion-1073" to be "success or failure" Mar 16 21:55:09.015: INFO: Pod "var-expansion-3600f314-ab24-43ce-9ee8-3ba4342df973": Phase="Pending", Reason="", readiness=false. Elapsed: 29.771507ms Mar 16 21:55:11.019: INFO: Pod "var-expansion-3600f314-ab24-43ce-9ee8-3ba4342df973": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033815628s Mar 16 21:55:13.023: INFO: Pod "var-expansion-3600f314-ab24-43ce-9ee8-3ba4342df973": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03766605s STEP: Saw pod success Mar 16 21:55:13.023: INFO: Pod "var-expansion-3600f314-ab24-43ce-9ee8-3ba4342df973" satisfied condition "success or failure" Mar 16 21:55:13.026: INFO: Trying to get logs from node jerma-worker pod var-expansion-3600f314-ab24-43ce-9ee8-3ba4342df973 container dapi-container: STEP: delete the pod Mar 16 21:55:13.052: INFO: Waiting for pod var-expansion-3600f314-ab24-43ce-9ee8-3ba4342df973 to disappear Mar 16 21:55:13.063: INFO: Pod var-expansion-3600f314-ab24-43ce-9ee8-3ba4342df973 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:55:13.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1073" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:55:13.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:55:13.124: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:55:19.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5325" for this suite. • [SLOW TEST:6.526 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":159,"skipped":2937,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:55:19.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 16 21:55:20.320: INFO: deployment "sample-crd-conversion-webhook-deployment" 
doesn't have the required revision set Mar 16 21:55:22.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992520, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992520, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992520, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992520, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 21:55:25.417: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:55:25.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:55:26.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7921" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.060 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":160,"skipped":2949,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:55:26.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
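------------------------------
The pod the [It] block below creates is not echoed in the log. A minimal sketch of a pod carrying a PreStop exec hook, with a hypothetical image and hook command (in client-go v1.23+ the handler type is corev1.LifecycleHandler; older releases, including the v1.17 vintage of this run, call it corev1.Handler):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container after deletion is requested
					// and before SIGTERM; termination waits for it, up to
					// the pod's terminationGracePeriodSeconds.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo prestop ran"},
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created", pod.Name)
}
------------------------------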
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 16 21:55:34.739: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 16 21:55:34.746: INFO: Pod pod-with-prestop-exec-hook still exists Mar 16 21:55:36.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 16 21:55:36.750: INFO: Pod pod-with-prestop-exec-hook still exists Mar 16 21:55:38.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 16 21:55:38.751: INFO: Pod pod-with-prestop-exec-hook still exists Mar 16 21:55:40.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 16 21:55:40.750: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:55:40.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4158" for this suite. • [SLOW TEST:14.108 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2953,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:55:40.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 16 21:55:40.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9765' Mar 16 21:55:41.127: INFO: stderr: "" Mar 16 21:55:41.127: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 16 21:55:41.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9765' Mar 16 21:55:41.231: INFO: stderr: "" Mar 16 21:55:41.231: INFO: stdout: "update-demo-nautilus-dr2lh update-demo-nautilus-rps7t " Mar 16 21:55:41.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr2lh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:55:41.321: INFO: stderr: "" Mar 16 21:55:41.321: INFO: stdout: "" Mar 16 21:55:41.321: INFO: update-demo-nautilus-dr2lh is created but not running Mar 16 21:55:46.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9765' Mar 16 21:55:46.420: INFO: stderr: "" Mar 16 21:55:46.420: INFO: stdout: "update-demo-nautilus-dr2lh update-demo-nautilus-rps7t " Mar 16 21:55:46.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr2lh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:55:46.523: INFO: stderr: "" Mar 16 21:55:46.523: INFO: stdout: "true" Mar 16 21:55:46.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr2lh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:55:46.625: INFO: stderr: "" Mar 16 21:55:46.626: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 21:55:46.626: INFO: validating pod update-demo-nautilus-dr2lh Mar 16 21:55:46.630: INFO: got data: { "image": "nautilus.jpg" } Mar 16 21:55:46.630: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 21:55:46.630: INFO: update-demo-nautilus-dr2lh is verified up and running Mar 16 21:55:46.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rps7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:55:46.725: INFO: stderr: "" Mar 16 21:55:46.725: INFO: stdout: "true" Mar 16 21:55:46.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rps7t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:55:46.819: INFO: stderr: "" Mar 16 21:55:46.819: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 21:55:46.819: INFO: validating pod update-demo-nautilus-rps7t Mar 16 21:55:46.822: INFO: got data: { "image": "nautilus.jpg" } Mar 16 21:55:46.822: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 16 21:55:46.822: INFO: update-demo-nautilus-rps7t is verified up and running STEP: scaling down the replication controller Mar 16 21:55:46.824: INFO: scanned /root for discovery docs: Mar 16 21:55:46.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9765' Mar 16 21:55:47.937: INFO: stderr: "" Mar 16 21:55:47.937: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 21:55:47.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9765' Mar 16 21:55:48.043: INFO: stderr: "" Mar 16 21:55:48.043: INFO: stdout: "update-demo-nautilus-dr2lh update-demo-nautilus-rps7t " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 16 21:55:53.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9765' Mar 16 21:55:53.145: INFO: stderr: "" Mar 16 21:55:53.145: INFO: stdout: "update-demo-nautilus-dr2lh update-demo-nautilus-rps7t " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 16 21:55:58.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9765' Mar 16 21:55:58.245: INFO: stderr: "" Mar 16 21:55:58.245: INFO: stdout: "update-demo-nautilus-dr2lh update-demo-nautilus-rps7t " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 16 21:56:03.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9765' Mar 16 21:56:03.352: INFO: stderr: "" Mar 16 21:56:03.352: INFO: stdout: "update-demo-nautilus-dr2lh " Mar 16 21:56:03.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr2lh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:56:03.444: INFO: stderr: "" Mar 16 21:56:03.444: INFO: stdout: "true" Mar 16 21:56:03.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr2lh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:56:03.535: INFO: stderr: "" Mar 16 21:56:03.535: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 21:56:03.535: INFO: validating pod update-demo-nautilus-dr2lh Mar 16 21:56:03.539: INFO: got data: { "image": "nautilus.jpg" } Mar 16 21:56:03.539: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 16 21:56:03.539: INFO: update-demo-nautilus-dr2lh is verified up and running STEP: scaling up the replication controller Mar 16 21:56:03.542: INFO: scanned /root for discovery docs: Mar 16 21:56:03.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9765' Mar 16 21:56:04.668: INFO: stderr: "" Mar 16 21:56:04.668: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 21:56:04.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9765' Mar 16 21:56:04.759: INFO: stderr: "" Mar 16 21:56:04.759: INFO: stdout: "update-demo-nautilus-4x7bg update-demo-nautilus-dr2lh " Mar 16 21:56:04.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4x7bg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:56:04.844: INFO: stderr: "" Mar 16 21:56:04.844: INFO: stdout: "" Mar 16 21:56:04.844: INFO: update-demo-nautilus-4x7bg is created but not running Mar 16 21:56:09.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9765' Mar 16 21:56:09.938: INFO: stderr: "" Mar 16 21:56:09.938: INFO: stdout: "update-demo-nautilus-4x7bg update-demo-nautilus-dr2lh " Mar 16 21:56:09.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4x7bg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:56:10.028: INFO: stderr: "" Mar 16 21:56:10.028: INFO: stdout: "true" Mar 16 21:56:10.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4x7bg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:56:10.117: INFO: stderr: "" Mar 16 21:56:10.117: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 21:56:10.117: INFO: validating pod update-demo-nautilus-4x7bg Mar 16 21:56:10.121: INFO: got data: { "image": "nautilus.jpg" } Mar 16 21:56:10.121: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 21:56:10.121: INFO: update-demo-nautilus-4x7bg is verified up and running Mar 16 21:56:10.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr2lh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:56:10.223: INFO: stderr: "" Mar 16 21:56:10.223: INFO: stdout: "true" Mar 16 21:56:10.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr2lh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9765' Mar 16 21:56:10.316: INFO: stderr: "" Mar 16 21:56:10.316: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 21:56:10.316: INFO: validating pod update-demo-nautilus-dr2lh Mar 16 21:56:10.320: INFO: got data: { "image": "nautilus.jpg" } Mar 16 21:56:10.320: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 21:56:10.320: INFO: update-demo-nautilus-dr2lh is verified up and running STEP: using delete to clean up resources Mar 16 21:56:10.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9765' Mar 16 21:56:10.421: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 21:56:10.421: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 16 21:56:10.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9765' Mar 16 21:56:10.519: INFO: stderr: "No resources found in kubectl-9765 namespace.\n" Mar 16 21:56:10.519: INFO: stdout: "" Mar 16 21:56:10.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9765 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 21:56:10.607: INFO: stderr: "" Mar 16 21:56:10.607: INFO: stdout: "update-demo-nautilus-4x7bg\nupdate-demo-nautilus-dr2lh\n" Mar 16 21:56:11.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9765' Mar 16 21:56:11.203: INFO: stderr: "No resources found in kubectl-9765 namespace.\n" Mar 16 21:56:11.203: INFO: stdout: "" Mar 16 21:56:11.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9765 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 21:56:11.296: INFO: stderr: "" Mar 16 21:56:11.296: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:56:11.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9765" for this suite. 
• [SLOW TEST:30.536 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":162,"skipped":2970,"failed":0} SSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:56:11.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 16 21:56:11.653: INFO: Created pod &Pod{ObjectMeta:{dns-8555 dns-8555 /api/v1/namespaces/dns-8555/pods/dns-8555 0ecbf31e-eb87-4d0c-af51-03ec5dd732de 329431 0 2020-03-16 21:56:11 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vntl4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vntl4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vntl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:
node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Mar 16 21:56:15.671: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8555 PodName:dns-8555 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:56:15.671: INFO: >>> kubeConfig: /root/.kube/config I0316 21:56:15.711530 6 log.go:172] (0xc002ad0840) (0xc001355ae0) Create stream I0316 21:56:15.711566 6 log.go:172] (0xc002ad0840) (0xc001355ae0) Stream added, broadcasting: 1 I0316 21:56:15.714510 6 log.go:172] (0xc002ad0840) Reply frame received for 1 I0316 21:56:15.714543 6 log.go:172] (0xc002ad0840) (0xc0012b21e0) Create stream I0316 21:56:15.714555 6 log.go:172] (0xc002ad0840) (0xc0012b21e0) Stream added, broadcasting: 3 I0316 21:56:15.716595 6 log.go:172] (0xc002ad0840) Reply frame received for 3 I0316 21:56:15.716625 6 log.go:172] (0xc002ad0840) (0xc001467e00) Create stream I0316 21:56:15.716635 6 log.go:172] (0xc002ad0840) (0xc001467e00) Stream added, broadcasting: 5 I0316 21:56:15.718150 6 log.go:172] (0xc002ad0840) Reply frame received for 5 I0316 21:56:15.783340 6 log.go:172] (0xc002ad0840) Data frame received for 3 I0316 21:56:15.783365 6 log.go:172] (0xc0012b21e0) (3) Data frame handling I0316 21:56:15.783386 6 log.go:172] (0xc0012b21e0) (3) Data frame sent I0316 21:56:15.783856 6 log.go:172] (0xc002ad0840) Data frame received for 5 I0316 21:56:15.783889 6 log.go:172] (0xc001467e00) (5) Data frame handling I0316 21:56:15.784080 6 log.go:172] (0xc002ad0840) Data frame received for 3 I0316 21:56:15.784131 6 log.go:172] (0xc0012b21e0) (3) Data frame handling I0316 21:56:15.785852 6 log.go:172] (0xc002ad0840) Data frame received for 1 I0316 21:56:15.785892 6 log.go:172] (0xc001355ae0) (1) Data frame handling I0316 21:56:15.785943 6 log.go:172] (0xc001355ae0) (1) Data frame sent I0316 21:56:15.785965 6 log.go:172] (0xc002ad0840) (0xc001355ae0) Stream removed, broadcasting: 1 I0316 21:56:15.786012 6 log.go:172] (0xc002ad0840) Go away received I0316 21:56:15.786089 6 log.go:172] (0xc002ad0840) (0xc001355ae0) Stream removed, broadcasting: 1 I0316 21:56:15.786121 6 log.go:172] (0xc002ad0840) (0xc0012b21e0) Stream removed, broadcasting: 3 I0316 21:56:15.786145 6 log.go:172] (0xc002ad0840) (0xc001467e00) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 16 21:56:15.786: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8555 PodName:dns-8555 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 21:56:15.786: INFO: >>> kubeConfig: /root/.kube/config I0316 21:56:15.823759 6 log.go:172] (0xc005256bb0) (0xc001c4e820) Create stream I0316 21:56:15.823787 6 log.go:172] (0xc005256bb0) (0xc001c4e820) Stream added, broadcasting: 1 I0316 21:56:15.826356 6 log.go:172] (0xc005256bb0) Reply frame received for 1 I0316 21:56:15.826382 6 log.go:172] (0xc005256bb0) (0xc00101a640) Create stream I0316 21:56:15.826390 6 log.go:172] (0xc005256bb0) (0xc00101a640) Stream added, broadcasting: 3 I0316 21:56:15.827330 6 log.go:172] (0xc005256bb0) Reply frame received for 3 I0316 21:56:15.827385 6 log.go:172] (0xc005256bb0) (0xc00101a780) Create stream I0316 21:56:15.827402 6 log.go:172] (0xc005256bb0) (0xc00101a780) Stream added, broadcasting: 5 I0316 21:56:15.828363 6 log.go:172] (0xc005256bb0) Reply frame received for 5 I0316 21:56:15.894169 6 log.go:172] (0xc005256bb0) Data frame received for 3 I0316 21:56:15.894197 6 log.go:172] (0xc00101a640) (3) Data frame handling I0316 21:56:15.894217 6 log.go:172] (0xc00101a640) (3) Data frame sent I0316 21:56:15.894993 6 log.go:172] (0xc005256bb0) Data frame received for 5 I0316 21:56:15.895031 6 log.go:172] (0xc00101a780) (5) Data frame handling I0316 21:56:15.895203 6 log.go:172] (0xc005256bb0) Data frame received for 3 I0316 21:56:15.895231 6 log.go:172] (0xc00101a640) (3) Data frame handling I0316 21:56:15.896892 6 log.go:172] (0xc005256bb0) Data frame received for 1 I0316 21:56:15.896916 6 log.go:172] (0xc001c4e820) (1) Data frame handling I0316 21:56:15.896929 6 log.go:172] (0xc001c4e820) (1) Data frame sent I0316 21:56:15.896947 6 log.go:172] (0xc005256bb0) (0xc001c4e820) Stream removed, broadcasting: 1 I0316 21:56:15.896968 6 log.go:172] (0xc005256bb0) Go away received I0316 21:56:15.897055 6 log.go:172] (0xc005256bb0) (0xc001c4e820) Stream removed, broadcasting: 1 I0316 21:56:15.897074 6 log.go:172] (0xc005256bb0) (0xc00101a640) Stream removed, broadcasting: 3 I0316 21:56:15.897083 6 log.go:172] (0xc005256bb0) (0xc00101a780) Stream removed, broadcasting: 5 Mar 16 21:56:15.897: INFO: Deleting pod dns-8555... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:56:15.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8555" for this suite. 
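------------------------------
The pod dump above shows DNSPolicy:None with a single nameserver and search domain. A minimal sketch constructing the same DNS stanza (nameserver, search path, image, and args copied from the logged spec; other names illustrative; assumes client-go v0.18+):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-custom"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
			// DNSNone ignores the cluster DNS entirely; the pod's
			// resolv.conf is built solely from DNSConfig below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created", pod.Name)
}
------------------------------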
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":163,"skipped":2974,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:56:15.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 16 21:56:20.776: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2991 pod-service-account-ff59651f-b918-48af-a018-808df7b3636e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 16 21:56:21.003: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2991 pod-service-account-ff59651f-b918-48af-a018-808df7b3636e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 16 21:56:21.185: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2991 pod-service-account-ff59651f-b918-48af-a018-808df7b3636e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:56:21.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2991" for this suite. 
• [SLOW TEST:5.442 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":164,"skipped":2981,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:56:21.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 16 21:56:21.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5646' Mar 16 21:56:21.527: INFO: stderr: "" Mar 16 21:56:21.527: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866 Mar 16 21:56:21.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5646' Mar 16 21:56:29.226: INFO: stderr: "" Mar 16 21:56:29.226: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:56:29.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5646" for this suite. 
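Note: with --restart=Never, kubectl run creates a bare Pod (via the run-pod/v1 generator in this v1.17-era suite) rather than a workload controller. A sketch of the equivalent object built directly with client-go, assuming recent method signatures that take a context (the v1.17-era client omitted it); namespace is illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-pod"},
		Spec: corev1.PodSpec{
			// What --restart=Never selects: the pod runs once, never restarts.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "httpd",
				Image: "docker.io/library/httpd:2.4.38-alpine",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}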
• [SLOW TEST:7.854 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1857 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":165,"skipped":2994,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:56:29.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0316 21:57:09.576975 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 21:57:09.577: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:57:09.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2288" for this suite. 
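Note: the garbage-collector test above deletes the replication controller with an orphaning delete and then waits 30 seconds to confirm the pods survive. The switch is DeleteOptions.PropagationPolicy. A sketch with client-go, assuming recent signatures; the RC name and namespace are illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Orphan propagation: the RC itself is deleted, but the garbage
	// collector strips ownerReferences from its pods instead of deleting them.
	orphan := metav1.DeletePropagationOrphan
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "my-rc",
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
}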
• [SLOW TEST:40.353 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":166,"skipped":3000,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:57:09.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-00198237-e088-4feb-b55a-7e00534e404a STEP: Creating a pod to test consume secrets Mar 16 21:57:09.660: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e829999c-f3b6-4ee5-931c-5b2db3ee6d9f" in namespace "projected-3097" to be "success or failure" Mar 16 21:57:09.664: INFO: Pod "pod-projected-secrets-e829999c-f3b6-4ee5-931c-5b2db3ee6d9f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.894113ms Mar 16 21:57:11.668: INFO: Pod "pod-projected-secrets-e829999c-f3b6-4ee5-931c-5b2db3ee6d9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008481669s Mar 16 21:57:13.694: INFO: Pod "pod-projected-secrets-e829999c-f3b6-4ee5-931c-5b2db3ee6d9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034378654s STEP: Saw pod success Mar 16 21:57:13.694: INFO: Pod "pod-projected-secrets-e829999c-f3b6-4ee5-931c-5b2db3ee6d9f" satisfied condition "success or failure" Mar 16 21:57:13.697: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e829999c-f3b6-4ee5-931c-5b2db3ee6d9f container projected-secret-volume-test: STEP: delete the pod Mar 16 21:57:13.725: INFO: Waiting for pod pod-projected-secrets-e829999c-f3b6-4ee5-931c-5b2db3ee6d9f to disappear Mar 16 21:57:13.730: INFO: Pod pod-projected-secrets-e829999c-f3b6-4ee5-931c-5b2db3ee6d9f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:57:13.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3097" for this suite. 
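Note: the projected-secret test above mounts a Secret through a projected volume and then reads the file back from inside the pod. A sketch of the volume shape it exercises; the secret name, key, path, and mode are illustrative stand-ins for the generated ones:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedSecretVolume shows the shape the test exercises: a "projected"
// volume that pulls one key out of a Secret into a file inside the pod.
func projectedSecretVolume() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "projected-secret-file",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", projectedSecretVolume())
}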
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":3008,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:57:13.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-9456 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9456 to expose endpoints map[] Mar 16 21:57:13.908: INFO: Get endpoints failed (66.369916ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 16 21:57:14.913: INFO: successfully validated that service endpoint-test2 in namespace services-9456 exposes endpoints map[] (1.072306966s elapsed) STEP: Creating pod pod1 in namespace services-9456 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9456 to expose endpoints map[pod1:[80]] Mar 16 21:57:19.173: INFO: successfully validated that service endpoint-test2 in namespace services-9456 exposes endpoints map[pod1:[80]] (4.231508719s elapsed) STEP: Creating pod pod2 in namespace services-9456 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9456 to expose endpoints map[pod1:[80] pod2:[80]] Mar 16 21:57:23.283: INFO: successfully validated that service endpoint-test2 in namespace services-9456 exposes endpoints map[pod1:[80] pod2:[80]] (4.104428731s elapsed) STEP: Deleting pod pod1 in namespace services-9456 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9456 to expose endpoints map[pod2:[80]] Mar 16 21:57:24.341: INFO: successfully validated that service endpoint-test2 in namespace services-9456 exposes endpoints map[pod2:[80]] (1.052736469s elapsed) STEP: Deleting pod pod2 in namespace services-9456 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9456 to expose endpoints map[] Mar 16 21:57:25.428: INFO: successfully validated that service endpoint-test2 in namespace services-9456 exposes endpoints map[] (1.074598962s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:57:25.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9456" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.802 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":168,"skipped":3066,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:57:25.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-db5b3757-00d2-4793-be9f-41e0c0acc853 STEP: Creating a pod to test consume configMaps Mar 16 21:57:25.635: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a89e04c5-fa0a-43ad-a960-6274fe72d154" in namespace "projected-3935" to be "success or failure" Mar 16 21:57:25.651: INFO: Pod "pod-projected-configmaps-a89e04c5-fa0a-43ad-a960-6274fe72d154": Phase="Pending", Reason="", readiness=false. Elapsed: 15.831712ms Mar 16 21:57:27.664: INFO: Pod "pod-projected-configmaps-a89e04c5-fa0a-43ad-a960-6274fe72d154": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029346041s Mar 16 21:57:29.669: INFO: Pod "pod-projected-configmaps-a89e04c5-fa0a-43ad-a960-6274fe72d154": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033755894s STEP: Saw pod success Mar 16 21:57:29.669: INFO: Pod "pod-projected-configmaps-a89e04c5-fa0a-43ad-a960-6274fe72d154" satisfied condition "success or failure" Mar 16 21:57:29.672: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-a89e04c5-fa0a-43ad-a960-6274fe72d154 container projected-configmap-volume-test: STEP: delete the pod Mar 16 21:57:29.694: INFO: Waiting for pod pod-projected-configmaps-a89e04c5-fa0a-43ad-a960-6274fe72d154 to disappear Mar 16 21:57:29.760: INFO: Pod pod-projected-configmaps-a89e04c5-fa0a-43ad-a960-6274fe72d154 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:57:29.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3935" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":3066,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:57:29.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:57:40.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2397" for this suite. • [SLOW TEST:11.157 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":170,"skipped":3068,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:57:40.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-87b58604-e359-4a79-a7bc-b9488b99a8e9 STEP: Creating a pod to test consume secrets Mar 16 21:57:40.989: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3bce8b47-81b4-4b18-be7b-640996d96a3d" in namespace "projected-9159" to be "success or failure" Mar 16 21:57:40.992: INFO: Pod "pod-projected-secrets-3bce8b47-81b4-4b18-be7b-640996d96a3d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.520302ms Mar 16 21:57:42.996: INFO: Pod "pod-projected-secrets-3bce8b47-81b4-4b18-be7b-640996d96a3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007792845s Mar 16 21:57:45.002: INFO: Pod "pod-projected-secrets-3bce8b47-81b4-4b18-be7b-640996d96a3d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013485972s STEP: Saw pod success Mar 16 21:57:45.002: INFO: Pod "pod-projected-secrets-3bce8b47-81b4-4b18-be7b-640996d96a3d" satisfied condition "success or failure" Mar 16 21:57:45.006: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3bce8b47-81b4-4b18-be7b-640996d96a3d container projected-secret-volume-test: STEP: delete the pod Mar 16 21:57:45.059: INFO: Waiting for pod pod-projected-secrets-3bce8b47-81b4-4b18-be7b-640996d96a3d to disappear Mar 16 21:57:45.070: INFO: Pod pod-projected-secrets-3bce8b47-81b4-4b18-be7b-640996d96a3d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:57:45.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9159" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":3108,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:57:45.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0316 21:58:15.681497 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 21:58:15.681: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:58:15.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4386" for this suite. 
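Note: as with the earlier orphan-pods case, the assertion behind this test is that after an orphaning delete of the owning Deployment the ReplicaSet still exists, with its ownerReferences stripped by the garbage collector. A sketch of checking that by hand; the namespace is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// After an orphaning delete of the Deployment, the surviving ReplicaSet
	// should report zero ownerReferences.
	rsList, err := cs.AppsV1().ReplicaSets("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, rs := range rsList.Items {
		fmt.Printf("%s owners=%d\n", rs.Name, len(rs.OwnerReferences))
	}
}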
• [SLOW TEST:30.610 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":172,"skipped":3115,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:58:15.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 16 21:58:15.763: INFO: Waiting up to 5m0s for pod "downward-api-e9943498-217e-44cf-b744-c7484ab471e0" in namespace "downward-api-5975" to be "success or failure" Mar 16 21:58:15.772: INFO: Pod "downward-api-e9943498-217e-44cf-b744-c7484ab471e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512874ms Mar 16 21:58:17.776: INFO: Pod "downward-api-e9943498-217e-44cf-b744-c7484ab471e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012872574s Mar 16 21:58:19.781: INFO: Pod "downward-api-e9943498-217e-44cf-b744-c7484ab471e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017635662s STEP: Saw pod success Mar 16 21:58:19.781: INFO: Pod "downward-api-e9943498-217e-44cf-b744-c7484ab471e0" satisfied condition "success or failure" Mar 16 21:58:19.784: INFO: Trying to get logs from node jerma-worker pod downward-api-e9943498-217e-44cf-b744-c7484ab471e0 container dapi-container: STEP: delete the pod Mar 16 21:58:19.823: INFO: Waiting for pod downward-api-e9943498-217e-44cf-b744-c7484ab471e0 to disappear Mar 16 21:58:19.838: INFO: Pod downward-api-e9943498-217e-44cf-b744-c7484ab471e0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:58:19.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5975" for this suite. 
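Note: the Downward API test above injects the container's own limits.cpu/memory and requests.cpu/memory into its environment through resourceFieldRef. A sketch of those EnvVar entries; the env and container names and the divisor are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// downwardAPIEnv wires a container's own limits/requests into env vars via
// resourceFieldRef, the mechanism the Downward API test asserts on.
func downwardAPIEnv() []corev1.EnvVar {
	mk := func(name, res string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: "dapi-container",
					Resource:      res,
					Divisor:       resource.MustParse("1"), // report in whole units
				},
			},
		}
	}
	return []corev1.EnvVar{
		mk("CPU_LIMIT", "limits.cpu"),
		mk("MEMORY_LIMIT", "limits.memory"),
		mk("CPU_REQUEST", "requests.cpu"),
		mk("MEMORY_REQUEST", "requests.memory"),
	}
}

func main() {
	for _, e := range downwardAPIEnv() {
		fmt.Println(e.Name, "->", e.ValueFrom.ResourceFieldRef.Resource)
	}
}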
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":3119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:58:19.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:58:19.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 16 21:58:20.587: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T21:58:20Z generation:1 name:name1 resourceVersion:330307 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f9309040-6fb1-4da6-bacf-20194559ea84] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 16 21:58:30.606: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T21:58:30Z generation:1 name:name2 resourceVersion:330365 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:47f73beb-6dcf-4ff7-ba26-ae090f86ba3c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 16 21:58:40.612: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T21:58:20Z generation:2 name:name1 resourceVersion:330394 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f9309040-6fb1-4da6-bacf-20194559ea84] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 16 21:58:50.618: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T21:58:30Z generation:2 name:name2 resourceVersion:330424 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:47f73beb-6dcf-4ff7-ba26-ae090f86ba3c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 16 21:59:00.625: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T21:58:20Z generation:2 name:name1 resourceVersion:330454 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f9309040-6fb1-4da6-bacf-20194559ea84] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 16 21:59:10.632: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T21:58:30Z generation:2 name:name2 resourceVersion:330482 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:47f73beb-6dcf-4ff7-ba26-ae090f86ba3c] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:59:21.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-385" for this suite. • [SLOW TEST:61.288 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":174,"skipped":3143,"failed":0} [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:59:21.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 16 21:59:21.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 16 21:59:21.299: INFO: stderr: "" Mar 16 21:59:21.299: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:59:21.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9846" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":175,"skipped":3143,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:59:21.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 16 21:59:21.356: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 21:59:21.390: INFO: Waiting for terminating namespaces to be deleted... Mar 16 21:59:21.393: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 16 21:59:21.400: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 21:59:21.400: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 21:59:21.400: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 21:59:21.400: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 21:59:21.400: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 16 21:59:21.416: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 21:59:21.416: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 21:59:21.416: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 21:59:21.416: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 16 21:59:21.475: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Mar 16 21:59:21.475: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Mar 16 21:59:21.475: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Mar 16 21:59:21.475: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 16 21:59:21.475: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 16 21:59:21.480: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-0e9f7a65-f598-4f5b-81b9-4c3e4764168e.15fce7bb857ba8ba], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5830/filler-pod-0e9f7a65-f598-4f5b-81b9-4c3e4764168e to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-0e9f7a65-f598-4f5b-81b9-4c3e4764168e.15fce7bbe20814ae], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-0e9f7a65-f598-4f5b-81b9-4c3e4764168e.15fce7bc24a278a6], Reason = [Created], Message = [Created container filler-pod-0e9f7a65-f598-4f5b-81b9-4c3e4764168e] STEP: Considering event: Type = [Normal], Name = [filler-pod-0e9f7a65-f598-4f5b-81b9-4c3e4764168e.15fce7bc318e3309], Reason = [Started], Message = [Started container filler-pod-0e9f7a65-f598-4f5b-81b9-4c3e4764168e] STEP: Considering event: Type = [Normal], Name = [filler-pod-6c87d61a-c8e2-491d-a994-7b36909327b0.15fce7bb850af8db], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5830/filler-pod-6c87d61a-c8e2-491d-a994-7b36909327b0 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-6c87d61a-c8e2-491d-a994-7b36909327b0.15fce7bbcaa79758], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6c87d61a-c8e2-491d-a994-7b36909327b0.15fce7bc1a3d75fb], Reason = [Created], Message = [Created container filler-pod-6c87d61a-c8e2-491d-a994-7b36909327b0] STEP: Considering event: Type = [Normal], Name = [filler-pod-6c87d61a-c8e2-491d-a994-7b36909327b0.15fce7bc2a087320], Reason = [Started], Message = [Started container filler-pod-6c87d61a-c8e2-491d-a994-7b36909327b0] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fce7bc74f94387], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:59:26.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5830" for this suite. 
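Note: the scheduling run above is driven entirely by resource accounting: each filler pod requests 11130m of CPU, the scheduler sums requests against node allocatable, and the extra pod is rejected with the FailedScheduling event shown ("2 Insufficient cpu", plus the tainted control-plane node). A sketch of such a filler pod; only the pause image tag is taken from the events above, the rest is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod requests a fixed slice of node CPU. The scheduler compares the
// sum of such requests with node allocatable; once that is exhausted, the
// next pod fails with "Insufficient cpu", as in the events above.
func fillerPod(name, cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu),
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", fillerPod("filler-pod", "11130m").Spec.Containers[0].Resources.Requests)
}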
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.303 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":176,"skipped":3154,"failed":0} SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:59:26.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 16 21:59:30.716: INFO: Pod pod-hostip-a47fa34a-0014-492e-b1e5-d1154ac524ef has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:59:30.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1623" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":3157,"failed":0} ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:59:30.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 21:59:54.842: INFO: Container started at 2020-03-16 21:59:33 +0000 UTC, pod became ready at 2020-03-16 21:59:53 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 21:59:54.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3272" for this suite. 
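Note: the readiness-probe test above asserts two properties: the pod must not report Ready before initialDelaySeconds has elapsed (started 21:59:33, ready 21:59:53, with a delay of roughly 20s), and a failing readiness probe, unlike a liveness probe, never restarts the container. A sketch of such a probe, assuming recent k8s.io/api; the command and thresholds are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// readinessProbe mirrors the test's contract: no Ready condition before the
// initial delay, and readiness failures only remove the pod from service
// endpoints; they never restart the container.
func readinessProbe() *corev1.Probe {
	return &corev1.Probe{
		// The handler field is named ProbeHandler in recent k8s.io/api; it
		// was the embedded Handler in the v1.17-era API this log comes from.
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
		},
		InitialDelaySeconds: 20,
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
}

func main() {
	fmt.Printf("%+v\n", readinessProbe())
}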
• [SLOW TEST:24.125 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":3157,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 21:59:54.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1415 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1415 I0316 21:59:55.023713 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1415, replica count: 2 I0316 21:59:58.074286 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 22:00:01.074556 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 22:00:01.074: INFO: Creating new exec pod Mar 16 22:00:06.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1415 execpodxktxr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 16 22:00:06.320: INFO: stderr: "I0316 22:00:06.223399 3089 log.go:172] (0xc00002b080) (0xc0009001e0) Create stream\nI0316 22:00:06.223461 3089 log.go:172] (0xc00002b080) (0xc0009001e0) Stream added, broadcasting: 1\nI0316 22:00:06.226691 3089 log.go:172] (0xc00002b080) Reply frame received for 1\nI0316 22:00:06.226728 3089 log.go:172] (0xc00002b080) (0xc0006619a0) Create stream\nI0316 22:00:06.226738 3089 log.go:172] (0xc00002b080) (0xc0006619a0) Stream added, broadcasting: 3\nI0316 22:00:06.227906 3089 log.go:172] (0xc00002b080) Reply frame received for 3\nI0316 22:00:06.227950 3089 log.go:172] (0xc00002b080) (0xc00035b360) Create stream\nI0316 22:00:06.227973 3089 log.go:172] (0xc00002b080) (0xc00035b360) Stream added, broadcasting: 5\nI0316 22:00:06.228886 3089 log.go:172] (0xc00002b080) Reply frame received for 5\nI0316 22:00:06.313649 3089 log.go:172] (0xc00002b080) Data frame received for 5\nI0316 22:00:06.313704 3089 log.go:172] (0xc00035b360) (5) Data frame handling\nI0316 22:00:06.313752 3089 log.go:172] 
(0xc00035b360) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0316 22:00:06.313929 3089 log.go:172] (0xc00002b080) Data frame received for 5\nI0316 22:00:06.313975 3089 log.go:172] (0xc00035b360) (5) Data frame handling\nI0316 22:00:06.314010 3089 log.go:172] (0xc00035b360) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0316 22:00:06.314341 3089 log.go:172] (0xc00002b080) Data frame received for 3\nI0316 22:00:06.314366 3089 log.go:172] (0xc0006619a0) (3) Data frame handling\nI0316 22:00:06.314693 3089 log.go:172] (0xc00002b080) Data frame received for 5\nI0316 22:00:06.314725 3089 log.go:172] (0xc00035b360) (5) Data frame handling\nI0316 22:00:06.316376 3089 log.go:172] (0xc00002b080) Data frame received for 1\nI0316 22:00:06.316408 3089 log.go:172] (0xc0009001e0) (1) Data frame handling\nI0316 22:00:06.316426 3089 log.go:172] (0xc0009001e0) (1) Data frame sent\nI0316 22:00:06.316442 3089 log.go:172] (0xc00002b080) (0xc0009001e0) Stream removed, broadcasting: 1\nI0316 22:00:06.316467 3089 log.go:172] (0xc00002b080) Go away received\nI0316 22:00:06.316855 3089 log.go:172] (0xc00002b080) (0xc0009001e0) Stream removed, broadcasting: 1\nI0316 22:00:06.316870 3089 log.go:172] (0xc00002b080) (0xc0006619a0) Stream removed, broadcasting: 3\nI0316 22:00:06.316877 3089 log.go:172] (0xc00002b080) (0xc00035b360) Stream removed, broadcasting: 5\n" Mar 16 22:00:06.321: INFO: stdout: "" Mar 16 22:00:06.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1415 execpodxktxr -- /bin/sh -x -c nc -zv -t -w 2 10.102.0.51 80' Mar 16 22:00:06.551: INFO: stderr: "I0316 22:00:06.480165 3112 log.go:172] (0xc00087a6e0) (0xc0005b1ea0) Create stream\nI0316 22:00:06.480225 3112 log.go:172] (0xc00087a6e0) (0xc0005b1ea0) Stream added, broadcasting: 1\nI0316 22:00:06.482875 3112 log.go:172] (0xc00087a6e0) Reply frame received for 1\nI0316 22:00:06.482922 3112 log.go:172] (0xc00087a6e0) (0xc0004ee780) Create stream\nI0316 22:00:06.482935 3112 log.go:172] (0xc00087a6e0) (0xc0004ee780) Stream added, broadcasting: 3\nI0316 22:00:06.484122 3112 log.go:172] (0xc00087a6e0) Reply frame received for 3\nI0316 22:00:06.484148 3112 log.go:172] (0xc00087a6e0) (0xc00097a000) Create stream\nI0316 22:00:06.484160 3112 log.go:172] (0xc00087a6e0) (0xc00097a000) Stream added, broadcasting: 5\nI0316 22:00:06.485395 3112 log.go:172] (0xc00087a6e0) Reply frame received for 5\nI0316 22:00:06.544432 3112 log.go:172] (0xc00087a6e0) Data frame received for 3\nI0316 22:00:06.544462 3112 log.go:172] (0xc0004ee780) (3) Data frame handling\nI0316 22:00:06.544487 3112 log.go:172] (0xc00087a6e0) Data frame received for 5\nI0316 22:00:06.544499 3112 log.go:172] (0xc00097a000) (5) Data frame handling\nI0316 22:00:06.544511 3112 log.go:172] (0xc00097a000) (5) Data frame sent\nI0316 22:00:06.544523 3112 log.go:172] (0xc00087a6e0) Data frame received for 5\nI0316 22:00:06.544533 3112 log.go:172] (0xc00097a000) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.0.51 80\nConnection to 10.102.0.51 80 port [tcp/http] succeeded!\nI0316 22:00:06.546378 3112 log.go:172] (0xc00087a6e0) Data frame received for 1\nI0316 22:00:06.546417 3112 log.go:172] (0xc0005b1ea0) (1) Data frame handling\nI0316 22:00:06.546439 3112 log.go:172] (0xc0005b1ea0) (1) Data frame sent\nI0316 22:00:06.546458 3112 log.go:172] (0xc00087a6e0) (0xc0005b1ea0) Stream removed, broadcasting: 1\nI0316 22:00:06.546491 3112 log.go:172] (0xc00087a6e0) Go away received\nI0316 22:00:06.547072 3112 
log.go:172] (0xc00087a6e0) (0xc0005b1ea0) Stream removed, broadcasting: 1\nI0316 22:00:06.547099 3112 log.go:172] (0xc00087a6e0) (0xc0004ee780) Stream removed, broadcasting: 3\nI0316 22:00:06.547112 3112 log.go:172] (0xc00087a6e0) (0xc00097a000) Stream removed, broadcasting: 5\n" Mar 16 22:00:06.551: INFO: stdout: "" Mar 16 22:00:06.551: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:00:06.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1415" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.739 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":179,"skipped":3165,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:00:06.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 22:00:10.690: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:00:10.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3766" for this suite. 
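Note: the termination-message test above relies on TerminationMessagePolicy FallbackToLogsOnError: when the container fails without writing to its termination-log path, the kubelet falls back to the tail of the container log, which is why "DONE" shows up as the terminated state's message. A sketch of such a container; the image and command are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// terminationContainer fails on purpose. With FallbackToLogsOnError and an
// empty termination-log file, the kubelet copies the tail of the container
// log ("DONE" here) into the terminated state's message, as asserted above.
func terminationContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "docker.io/library/busybox:1.29",
		Command:                  []string{"/bin/sh", "-c", "echo DONE; exit 1"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}

func main() {
	fmt.Println(terminationContainer().TerminationMessagePolicy)
}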
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3177,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:00:10.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:00:10.776: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 16 22:00:12.994: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:00:14.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7783" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":181,"skipped":3181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:00:14.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-8540/configmap-test-345b6755-39b0-4f14-a433-9353a9337554 STEP: Creating a pod to test consume configMaps Mar 16 22:00:14.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-08f1958f-af1a-4187-8f3e-9ea32c09fec9" in namespace "configmap-8540" to be "success or failure" Mar 16 22:00:14.711: INFO: Pod "pod-configmaps-08f1958f-af1a-4187-8f3e-9ea32c09fec9": Phase="Pending", Reason="", readiness=false. Elapsed: 89.383137ms Mar 16 22:00:16.738: INFO: Pod "pod-configmaps-08f1958f-af1a-4187-8f3e-9ea32c09fec9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.116694466s Mar 16 22:00:18.742: INFO: Pod "pod-configmaps-08f1958f-af1a-4187-8f3e-9ea32c09fec9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120862058s STEP: Saw pod success Mar 16 22:00:18.742: INFO: Pod "pod-configmaps-08f1958f-af1a-4187-8f3e-9ea32c09fec9" satisfied condition "success or failure" Mar 16 22:00:18.746: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-08f1958f-af1a-4187-8f3e-9ea32c09fec9 container env-test: STEP: delete the pod Mar 16 22:00:18.763: INFO: Waiting for pod pod-configmaps-08f1958f-af1a-4187-8f3e-9ea32c09fec9 to disappear Mar 16 22:00:18.768: INFO: Pod pod-configmaps-08f1958f-af1a-4187-8f3e-9ea32c09fec9 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:00:18.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8540" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3239,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:00:18.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 16 22:00:18.843: INFO: >>> kubeConfig: /root/.kube/config Mar 16 22:00:21.761: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:00:31.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7373" for this suite. 
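Note: the ConfigMap environment-variable test above uses valueFrom.configMapKeyRef to surface a single ConfigMap key as an env var, which the env-test container then prints for verification. A sketch of that EnvVar; the env, ConfigMap, and key names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// configMapEnv pulls one ConfigMap key into the container environment, the
// mechanism the "consumable via environment variable" test verifies.
func configMapEnv() corev1.EnvVar {
	return corev1.EnvVar{
		Name: "CONFIG_DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
				Key:                  "data-1",
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", configMapEnv())
}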
• [SLOW TEST:12.310 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":183,"skipped":3254,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:00:31.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 22:00:31.619: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 22:00:33.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992831, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992831, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992831, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992831, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 22:00:35.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992831, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992831, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992831, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992831, loc:(*time.Location)(0x7d83a80)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 22:00:38.655: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:00:38.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1039" for this suite. STEP: Destroying namespace "webhook-1039-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.738 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":184,"skipped":3261,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:00:38.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-6cdg STEP: Creating a pod to test atomic-volume-subpath Mar 16 22:00:38.903: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6cdg" in namespace "subpath-6574" to be "success or failure" Mar 16 22:00:38.906: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.508981ms Mar 16 22:00:40.910: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006826248s Mar 16 22:00:42.914: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.011069181s Mar 16 22:00:44.918: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Running", Reason="", readiness=true. Elapsed: 6.015300421s Mar 16 22:00:46.923: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Running", Reason="", readiness=true. Elapsed: 8.019700566s Mar 16 22:00:48.927: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Running", Reason="", readiness=true. Elapsed: 10.024096278s Mar 16 22:00:50.931: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Running", Reason="", readiness=true. Elapsed: 12.028301111s Mar 16 22:00:52.935: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Running", Reason="", readiness=true. Elapsed: 14.031842799s Mar 16 22:00:54.939: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Running", Reason="", readiness=true. Elapsed: 16.03609133s Mar 16 22:00:56.943: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Running", Reason="", readiness=true. Elapsed: 18.040351447s Mar 16 22:00:58.947: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Running", Reason="", readiness=true. Elapsed: 20.044061962s Mar 16 22:01:00.951: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Running", Reason="", readiness=true. Elapsed: 22.048060359s Mar 16 22:01:02.956: INFO: Pod "pod-subpath-test-configmap-6cdg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.052793066s STEP: Saw pod success Mar 16 22:01:02.956: INFO: Pod "pod-subpath-test-configmap-6cdg" satisfied condition "success or failure" Mar 16 22:01:02.959: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-6cdg container test-container-subpath-configmap-6cdg: STEP: delete the pod Mar 16 22:01:02.982: INFO: Waiting for pod pod-subpath-test-configmap-6cdg to disappear Mar 16 22:01:03.032: INFO: Pod pod-subpath-test-configmap-6cdg no longer exists STEP: Deleting pod pod-subpath-test-configmap-6cdg Mar 16 22:01:03.032: INFO: Deleting pod "pod-subpath-test-configmap-6cdg" in namespace "subpath-6574" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:03.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6574" for this suite. 
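The scenario just run, a ConfigMap key mounted over a file that already exists in the image via subPath, can be reproduced with a manifest along these lines; all names and the target file are illustrative assumptions, not taken from the suite:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-cm        # hypothetical name
data:
  passwd: "demo:x:1000:1000::/home/demo:/bin/sh"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/passwd"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/passwd   # an existing file in the image, overlaid by the subPath mount
      subPath: passwd
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo-cm
EOF

kubectl logs subpath-demo should then print the ConfigMap payload rather than the image's original file.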
• [SLOW TEST:24.218 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":185,"skipped":3262,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:03.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 16 22:01:03.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2620' Mar 16 22:01:03.215: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 16 22:01:03.215: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 16 22:01:03.256: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-s6h2t] Mar 16 22:01:03.256: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-s6h2t" in namespace "kubectl-2620" to be "running and ready" Mar 16 22:01:03.268: INFO: Pod "e2e-test-httpd-rc-s6h2t": Phase="Pending", Reason="", readiness=false. Elapsed: 11.60813ms Mar 16 22:01:05.271: INFO: Pod "e2e-test-httpd-rc-s6h2t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015272597s Mar 16 22:01:07.276: INFO: Pod "e2e-test-httpd-rc-s6h2t": Phase="Running", Reason="", readiness=true. Elapsed: 4.019543597s Mar 16 22:01:07.276: INFO: Pod "e2e-test-httpd-rc-s6h2t" satisfied condition "running and ready" Mar 16 22:01:07.276: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-httpd-rc-s6h2t] Mar 16 22:01:07.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2620' Mar 16 22:01:07.386: INFO: stderr: "" Mar 16 22:01:07.386: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.130. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.130. Set the 'ServerName' directive globally to suppress this message\n[Mon Mar 16 22:01:05.235190 2020] [mpm_event:notice] [pid 1:tid 140147946257256] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Mar 16 22:01:05.235239 2020] [core:notice] [pid 1:tid 140147946257256] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637 Mar 16 22:01:07.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2620' Mar 16 22:01:07.486: INFO: stderr: "" Mar 16 22:01:07.486: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:07.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2620" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":186,"skipped":3264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:07.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 22:01:08.056: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 22:01:10.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992868, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992868, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992868, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719992868, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 22:01:13.185: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:01:13.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:14.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9872" for this suite. STEP: Destroying namespace "webhook-9872-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.980 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":187,"skipped":3301,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:14.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8090.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8090.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 22:01:20.613: INFO: DNS probes using dns-8090/dns-test-4a4c003b-0c23-410a-bbef-8cc55ec3f794 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:20.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8090" for this suite. • [SLOW TEST:6.243 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":188,"skipped":3306,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:20.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 22:01:20.982: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c9737f8-10a4-40c4-9785-c620b40902df" in namespace "projected-9879" to be "success or failure" Mar 16 22:01:20.992: INFO: Pod "downwardapi-volume-6c9737f8-10a4-40c4-9785-c620b40902df": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.534319ms Mar 16 22:01:23.035: INFO: Pod "downwardapi-volume-6c9737f8-10a4-40c4-9785-c620b40902df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052314895s Mar 16 22:01:25.038: INFO: Pod "downwardapi-volume-6c9737f8-10a4-40c4-9785-c620b40902df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055521077s STEP: Saw pod success Mar 16 22:01:25.038: INFO: Pod "downwardapi-volume-6c9737f8-10a4-40c4-9785-c620b40902df" satisfied condition "success or failure" Mar 16 22:01:25.039: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6c9737f8-10a4-40c4-9785-c620b40902df container client-container: STEP: delete the pod Mar 16 22:01:25.075: INFO: Waiting for pod downwardapi-volume-6c9737f8-10a4-40c4-9785-c620b40902df to disappear Mar 16 22:01:25.118: INFO: Pod downwardapi-volume-6c9737f8-10a4-40c4-9785-c620b40902df no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:25.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9879" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3327,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:25.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 16 22:01:26.089: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2678 /api/v1/namespaces/watch-2678/configmaps/e2e-watch-test-resource-version 603b7a51-6f32-49de-875a-631e0a159932 331457 0 2020-03-16 22:01:26 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 22:01:26.089: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2678 /api/v1/namespaces/watch-2678/configmaps/e2e-watch-test-resource-version 603b7a51-6f32-49de-875a-631e0a159932 331459 0 2020-03-16 22:01:26 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:26.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2678" for this suite. 
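The resourceVersion semantics the watch test relies on can also be observed against the raw API; a sketch, with the ConfigMap name and namespace as placeholders:

# Grab the current resourceVersion of a ConfigMap, then watch from that point.
RV=$(kubectl get configmap my-cm -n default -o jsonpath='{.metadata.resourceVersion}')
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"
# Only changes made after ${RV} are streamed, which is what the test above
# asserts: starting from the first update's resourceVersion, it observes the
# second MODIFIED event and the DELETED event, but not the first modification.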
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":190,"skipped":3331,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:26.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-8c37bf09-7dd1-4ef5-b955-9626275d123d STEP: Creating a pod to test consume secrets Mar 16 22:01:26.221: INFO: Waiting up to 5m0s for pod "pod-secrets-da2d8e60-85cd-4aa6-81ad-d2e1900d7d30" in namespace "secrets-4801" to be "success or failure" Mar 16 22:01:26.248: INFO: Pod "pod-secrets-da2d8e60-85cd-4aa6-81ad-d2e1900d7d30": Phase="Pending", Reason="", readiness=false. Elapsed: 27.565881ms Mar 16 22:01:28.252: INFO: Pod "pod-secrets-da2d8e60-85cd-4aa6-81ad-d2e1900d7d30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031081732s Mar 16 22:01:30.272: INFO: Pod "pod-secrets-da2d8e60-85cd-4aa6-81ad-d2e1900d7d30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051032303s STEP: Saw pod success Mar 16 22:01:30.272: INFO: Pod "pod-secrets-da2d8e60-85cd-4aa6-81ad-d2e1900d7d30" satisfied condition "success or failure" Mar 16 22:01:30.274: INFO: Trying to get logs from node jerma-worker pod pod-secrets-da2d8e60-85cd-4aa6-81ad-d2e1900d7d30 container secret-volume-test: STEP: delete the pod Mar 16 22:01:30.299: INFO: Waiting for pod pod-secrets-da2d8e60-85cd-4aa6-81ad-d2e1900d7d30 to disappear Mar 16 22:01:30.333: INFO: Pod pod-secrets-da2d8e60-85cd-4aa6-81ad-d2e1900d7d30 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:30.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4801" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3345,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:30.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 16 22:01:30.451: INFO: Waiting up to 5m0s for pod "pod-f2daf620-c78d-4872-9999-7699cbe48b7e" in namespace "emptydir-9898" to be "success or failure" Mar 16 22:01:30.456: INFO: Pod "pod-f2daf620-c78d-4872-9999-7699cbe48b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.031353ms Mar 16 22:01:32.459: INFO: Pod "pod-f2daf620-c78d-4872-9999-7699cbe48b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008803968s Mar 16 22:01:34.463: INFO: Pod "pod-f2daf620-c78d-4872-9999-7699cbe48b7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012192126s STEP: Saw pod success Mar 16 22:01:34.463: INFO: Pod "pod-f2daf620-c78d-4872-9999-7699cbe48b7e" satisfied condition "success or failure" Mar 16 22:01:34.466: INFO: Trying to get logs from node jerma-worker2 pod pod-f2daf620-c78d-4872-9999-7699cbe48b7e container test-container: STEP: delete the pod Mar 16 22:01:34.480: INFO: Waiting for pod pod-f2daf620-c78d-4872-9999-7699cbe48b7e to disappear Mar 16 22:01:34.535: INFO: Pod pod-f2daf620-c78d-4872-9999-7699cbe48b7e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:34.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9898" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:34.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:45.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9847" for this suite. • [SLOW TEST:11.097 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":193,"skipped":3391,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:45.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0316 22:01:47.022134 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 16 22:01:47.022: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:47.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2091" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":194,"skipped":3407,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:47.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1642ed86-59db-414e-bece-fb855b14c36a STEP: Creating a pod to test consume secrets Mar 16 22:01:47.242: INFO: Waiting up to 5m0s for pod "pod-secrets-9e45c370-e369-46b0-87d7-3868b99a319d" in namespace "secrets-9109" to be "success or failure" Mar 16 22:01:47.245: INFO: Pod "pod-secrets-9e45c370-e369-46b0-87d7-3868b99a319d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.02005ms Mar 16 22:01:49.249: INFO: Pod "pod-secrets-9e45c370-e369-46b0-87d7-3868b99a319d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006966416s Mar 16 22:01:51.254: INFO: Pod "pod-secrets-9e45c370-e369-46b0-87d7-3868b99a319d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011793112s STEP: Saw pod success Mar 16 22:01:51.254: INFO: Pod "pod-secrets-9e45c370-e369-46b0-87d7-3868b99a319d" satisfied condition "success or failure" Mar 16 22:01:51.258: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9e45c370-e369-46b0-87d7-3868b99a319d container secret-volume-test: STEP: delete the pod Mar 16 22:01:51.297: INFO: Waiting for pod pod-secrets-9e45c370-e369-46b0-87d7-3868b99a319d to disappear Mar 16 22:01:51.316: INFO: Pod pod-secrets-9e45c370-e369-46b0-87d7-3868b99a319d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:01:51.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9109" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3409,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:01:51.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-st4qf in namespace proxy-9272 I0316 22:01:51.438495 6 runners.go:189] Created replication controller with name: proxy-service-st4qf, namespace: proxy-9272, replica count: 1 I0316 22:01:52.488911 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 22:01:53.489276 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 22:01:54.489638 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 22:01:55.489891 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 22:01:56.490148 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 22:01:57.490393 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 22:01:58.490624 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 22:01:59.490918 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 22:02:00.491183 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 22:02:01.491457 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 22:02:02.491753 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 22:02:03.492010 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 22:02:04.492275 6 runners.go:189] proxy-service-st4qf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 22:02:04.496: INFO: setup took 13.127051873s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 16 22:02:04.503: INFO: (0) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 7.149245ms) Mar 16 22:02:04.503: INFO: (0) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... (200; 7.622189ms) Mar 16 22:02:04.505: INFO: (0) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 9.404597ms) Mar 16 22:02:04.507: INFO: (0) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 11.402519ms) Mar 16 22:02:04.507: INFO: (0) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 11.401565ms) Mar 16 22:02:04.508: INFO: (0) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 11.549323ms) Mar 16 22:02:04.508: INFO: (0) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 11.795606ms) Mar 16 22:02:04.509: INFO: (0) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 12.786841ms) Mar 16 22:02:04.509: INFO: (0) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 12.524925ms) Mar 16 22:02:04.509: INFO: (0) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 12.765123ms) Mar 16 22:02:04.509: INFO: (0) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 12.572326ms) Mar 16 22:02:04.512: INFO: (0) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 15.719797ms) Mar 16 22:02:04.512: INFO: (0) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 15.988545ms) Mar 16 22:02:04.512: INFO: (0) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 16.029949ms) Mar 16 22:02:04.512: INFO: (0) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 16.05542ms) Mar 16 22:02:04.514: INFO: (0) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test<... (200; 3.2616ms) Mar 16 22:02:04.518: INFO: (1) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... 
(200; 3.537043ms) Mar 16 22:02:04.518: INFO: (1) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 3.704821ms) Mar 16 22:02:04.518: INFO: (1) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 3.608015ms) Mar 16 22:02:04.518: INFO: (1) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 3.748136ms) Mar 16 22:02:04.518: INFO: (1) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 3.802504ms) Mar 16 22:02:04.518: INFO: (1) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 3.756572ms) Mar 16 22:02:04.518: INFO: (1) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test (200; 4.00927ms) Mar 16 22:02:04.518: INFO: (1) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 4.080525ms) Mar 16 22:02:04.519: INFO: (1) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 4.474717ms) Mar 16 22:02:04.519: INFO: (1) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 4.587512ms) Mar 16 22:02:04.519: INFO: (1) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 4.633866ms) Mar 16 22:02:04.519: INFO: (1) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 4.649816ms) Mar 16 22:02:04.519: INFO: (1) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 5.221823ms) Mar 16 22:02:04.522: INFO: (2) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test<... (200; 6.392495ms) Mar 16 22:02:04.527: INFO: (2) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... 
(200; 7.243482ms) Mar 16 22:02:04.528: INFO: (2) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 8.692074ms) Mar 16 22:02:04.528: INFO: (2) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 8.7469ms) Mar 16 22:02:04.528: INFO: (2) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 8.807171ms) Mar 16 22:02:04.528: INFO: (2) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 8.877947ms) Mar 16 22:02:04.528: INFO: (2) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 8.82347ms) Mar 16 22:02:04.528: INFO: (2) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 8.968226ms) Mar 16 22:02:04.529: INFO: (2) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 9.144015ms) Mar 16 22:02:04.529: INFO: (2) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 9.478089ms) Mar 16 22:02:04.529: INFO: (2) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 9.508326ms) Mar 16 22:02:04.529: INFO: (2) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 9.619808ms) Mar 16 22:02:04.529: INFO: (2) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 9.814737ms) Mar 16 22:02:04.530: INFO: (2) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 10.352288ms) Mar 16 22:02:04.533: INFO: (3) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 3.285449ms) Mar 16 22:02:04.533: INFO: (3) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 3.235714ms) Mar 16 22:02:04.534: INFO: (3) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 4.269521ms) Mar 16 22:02:04.535: INFO: (3) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 4.960191ms) Mar 16 22:02:04.536: INFO: (3) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 5.525111ms) Mar 16 22:02:04.536: INFO: (3) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test<... 
(200; 5.585108ms) Mar 16 22:02:04.536: INFO: (3) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 5.650854ms) Mar 16 22:02:04.536: INFO: (3) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 5.702984ms) Mar 16 22:02:04.536: INFO: (3) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 5.783638ms) Mar 16 22:02:04.536: INFO: (3) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 5.744574ms) Mar 16 22:02:04.536: INFO: (3) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 5.990026ms) Mar 16 22:02:04.536: INFO: (3) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 6.180291ms) Mar 16 22:02:04.536: INFO: (3) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 6.285816ms) Mar 16 22:02:04.540: INFO: (4) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 3.575019ms) Mar 16 22:02:04.540: INFO: (4) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 3.654204ms) Mar 16 22:02:04.541: INFO: (4) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 4.644674ms) Mar 16 22:02:04.541: INFO: (4) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 4.849131ms) Mar 16 22:02:04.542: INFO: (4) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 5.053756ms) Mar 16 22:02:04.542: INFO: (4) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... (200; 5.109371ms) Mar 16 22:02:04.542: INFO: (4) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 5.107482ms) Mar 16 22:02:04.542: INFO: (4) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 5.339545ms) Mar 16 22:02:04.542: INFO: (4) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: ... (200; 3.606083ms) Mar 16 22:02:04.546: INFO: (5) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... (200; 3.668177ms) Mar 16 22:02:04.547: INFO: (5) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 3.871825ms) Mar 16 22:02:04.547: INFO: (5) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test (200; 5.009508ms) Mar 16 22:02:04.548: INFO: (5) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 5.121864ms) Mar 16 22:02:04.548: INFO: (5) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 5.158567ms) Mar 16 22:02:04.548: INFO: (5) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 5.252502ms) Mar 16 22:02:04.550: INFO: (6) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 2.070758ms) Mar 16 22:02:04.552: INFO: (6) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... 
(200; 4.308793ms) Mar 16 22:02:04.552: INFO: (6) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 4.398149ms) Mar 16 22:02:04.552: INFO: (6) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 4.361202ms) Mar 16 22:02:04.553: INFO: (6) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 4.722281ms) Mar 16 22:02:04.553: INFO: (6) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 4.74132ms) Mar 16 22:02:04.553: INFO: (6) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 4.93389ms) Mar 16 22:02:04.553: INFO: (6) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... (200; 5.164785ms) Mar 16 22:02:04.553: INFO: (6) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 5.188298ms) Mar 16 22:02:04.553: INFO: (6) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 5.252248ms) Mar 16 22:02:04.553: INFO: (6) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 5.279473ms) Mar 16 22:02:04.553: INFO: (6) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 5.274691ms) Mar 16 22:02:04.553: INFO: (6) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test (200; 3.807439ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 3.613607ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 4.919081ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 4.100558ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 4.684441ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 4.995608ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 4.204166ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 5.080309ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 4.720884ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... (200; 5.43975ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 4.923237ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 5.341975ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 5.569355ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 4.574828ms) Mar 16 22:02:04.559: INFO: (7) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 4.918655ms) Mar 16 22:02:04.563: INFO: (8) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... 
(200; 2.917301ms) Mar 16 22:02:04.563: INFO: (8) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 3.486107ms) Mar 16 22:02:04.563: INFO: (8) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 3.575006ms) Mar 16 22:02:04.563: INFO: (8) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 3.788942ms) Mar 16 22:02:04.564: INFO: (8) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 4.534788ms) Mar 16 22:02:04.564: INFO: (8) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 4.544319ms) Mar 16 22:02:04.564: INFO: (8) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 4.62292ms) Mar 16 22:02:04.564: INFO: (8) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test<... (200; 3.045648ms) Mar 16 22:02:04.569: INFO: (9) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 3.053114ms) Mar 16 22:02:04.569: INFO: (9) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 3.450162ms) Mar 16 22:02:04.569: INFO: (9) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 3.529583ms) Mar 16 22:02:04.569: INFO: (9) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 3.630803ms) Mar 16 22:02:04.569: INFO: (9) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test (200; 3.760168ms) Mar 16 22:02:04.569: INFO: (9) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 3.777708ms) Mar 16 22:02:04.569: INFO: (9) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 3.913576ms) Mar 16 22:02:04.570: INFO: (9) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 4.39643ms) Mar 16 22:02:04.571: INFO: (9) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 5.373683ms) Mar 16 22:02:04.571: INFO: (9) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 5.372112ms) Mar 16 22:02:04.571: INFO: (9) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 5.329793ms) Mar 16 22:02:04.571: INFO: (9) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 5.36639ms) Mar 16 22:02:04.571: INFO: (9) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 5.271568ms) Mar 16 22:02:04.571: INFO: (9) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 5.50685ms) Mar 16 22:02:04.575: INFO: (10) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 3.642763ms) Mar 16 22:02:04.575: INFO: (10) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 3.988919ms) Mar 16 22:02:04.575: INFO: (10) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 3.879202ms) Mar 16 22:02:04.575: INFO: (10) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 3.941986ms) Mar 16 22:02:04.575: INFO: (10) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... 
(200; 4.025444ms) Mar 16 22:02:04.575: INFO: (10) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 3.953862ms) Mar 16 22:02:04.575: INFO: (10) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 4.195755ms) Mar 16 22:02:04.575: INFO: (10) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 4.196752ms) Mar 16 22:02:04.575: INFO: (10) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: ... (200; 4.320211ms) Mar 16 22:02:04.581: INFO: (11) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 4.361165ms) Mar 16 22:02:04.581: INFO: (11) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 4.289763ms) Mar 16 22:02:04.581: INFO: (11) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 4.761532ms) Mar 16 22:02:04.581: INFO: (11) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 4.566469ms) Mar 16 22:02:04.581: INFO: (11) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test<... (200; 4.906141ms) Mar 16 22:02:04.582: INFO: (11) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 5.032071ms) Mar 16 22:02:04.582: INFO: (11) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 5.258767ms) Mar 16 22:02:04.582: INFO: (11) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 5.099991ms) Mar 16 22:02:04.582: INFO: (11) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 5.186999ms) Mar 16 22:02:04.582: INFO: (11) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 5.224046ms) Mar 16 22:02:04.582: INFO: (11) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 5.439342ms) Mar 16 22:02:04.584: INFO: (12) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 2.049935ms) Mar 16 22:02:04.586: INFO: (12) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 3.407413ms) Mar 16 22:02:04.586: INFO: (12) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... (200; 3.546427ms) Mar 16 22:02:04.590: INFO: (12) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 7.956181ms) Mar 16 22:02:04.592: INFO: (12) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 9.778284ms) Mar 16 22:02:04.592: INFO: (12) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... 
(200; 9.742387ms) Mar 16 22:02:04.592: INFO: (12) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 9.935786ms) Mar 16 22:02:04.592: INFO: (12) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 9.990579ms) Mar 16 22:02:04.592: INFO: (12) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 10.023688ms) Mar 16 22:02:04.592: INFO: (12) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 10.057108ms) Mar 16 22:02:04.592: INFO: (12) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 10.02465ms) Mar 16 22:02:04.592: INFO: (12) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test (200; 2.932894ms) Mar 16 22:02:04.596: INFO: (13) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... (200; 2.983896ms) Mar 16 22:02:04.598: INFO: (13) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 4.75154ms) Mar 16 22:02:04.598: INFO: (13) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 4.765814ms) Mar 16 22:02:04.598: INFO: (13) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 4.930044ms) Mar 16 22:02:04.598: INFO: (13) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 4.908476ms) Mar 16 22:02:04.598: INFO: (13) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 5.209675ms) Mar 16 22:02:04.598: INFO: (13) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 5.20866ms) Mar 16 22:02:04.598: INFO: (13) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 5.457406ms) Mar 16 22:02:04.599: INFO: (13) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 6.617632ms) Mar 16 22:02:04.600: INFO: (13) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 6.748192ms) Mar 16 22:02:04.600: INFO: (13) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 6.781566ms) Mar 16 22:02:04.600: INFO: (13) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 6.866669ms) Mar 16 22:02:04.600: INFO: (13) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 6.888681ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 4.742846ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 4.720902ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... (200; 4.945232ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 5.015172ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 5.199819ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: ... 
(200; 5.221671ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 5.268176ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 5.329211ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 5.400264ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 5.443799ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 5.636127ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 5.640651ms) Mar 16 22:02:04.605: INFO: (14) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 5.702321ms) Mar 16 22:02:04.606: INFO: (14) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 5.709209ms) Mar 16 22:02:04.606: INFO: (14) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 6.023998ms) Mar 16 22:02:04.609: INFO: (15) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 3.055811ms) Mar 16 22:02:04.609: INFO: (15) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 3.331751ms) Mar 16 22:02:04.611: INFO: (15) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 4.803636ms) Mar 16 22:02:04.611: INFO: (15) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 4.82499ms) Mar 16 22:02:04.611: INFO: (15) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 4.828027ms) Mar 16 22:02:04.611: INFO: (15) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test (200; 5.570639ms) Mar 16 22:02:04.612: INFO: (15) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 5.662272ms) Mar 16 22:02:04.612: INFO: (15) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 5.724598ms) Mar 16 22:02:04.612: INFO: (15) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... (200; 5.736671ms) Mar 16 22:02:04.612: INFO: (15) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 5.985992ms) Mar 16 22:02:04.612: INFO: (15) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 6.327347ms) Mar 16 22:02:04.612: INFO: (15) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 6.379525ms) Mar 16 22:02:04.612: INFO: (15) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 6.455034ms) Mar 16 22:02:04.616: INFO: (16) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 3.499256ms) Mar 16 22:02:04.616: INFO: (16) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 3.573117ms) Mar 16 22:02:04.616: INFO: (16) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... 
(200; 3.537123ms) Mar 16 22:02:04.616: INFO: (16) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 3.486248ms) Mar 16 22:02:04.616: INFO: (16) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test<... (200; 4.048581ms) Mar 16 22:02:04.617: INFO: (16) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 4.72592ms) Mar 16 22:02:04.617: INFO: (16) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 4.870335ms) Mar 16 22:02:04.617: INFO: (16) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 5.022125ms) Mar 16 22:02:04.618: INFO: (16) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 5.322626ms) Mar 16 22:02:04.618: INFO: (16) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 5.342897ms) Mar 16 22:02:04.618: INFO: (16) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 5.772053ms) Mar 16 22:02:04.622: INFO: (17) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 3.543432ms) Mar 16 22:02:04.622: INFO: (17) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test<... (200; 4.403427ms) Mar 16 22:02:04.623: INFO: (17) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 4.714145ms) Mar 16 22:02:04.623: INFO: (17) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 4.698459ms) Mar 16 22:02:04.623: INFO: (17) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 4.86738ms) Mar 16 22:02:04.623: INFO: (17) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 5.094213ms) Mar 16 22:02:04.623: INFO: (17) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 5.260338ms) Mar 16 22:02:04.623: INFO: (17) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:162/proxy/: bar (200; 5.189679ms) Mar 16 22:02:04.624: INFO: (17) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 5.223515ms) Mar 16 22:02:04.624: INFO: (17) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 5.253856ms) Mar 16 22:02:04.624: INFO: (17) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 5.262414ms) Mar 16 22:02:04.624: INFO: (17) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 5.301233ms) Mar 16 22:02:04.628: INFO: (18) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n/proxy/: test (200; 3.919968ms) Mar 16 22:02:04.628: INFO: (18) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:460/proxy/: tls baz (200; 3.887606ms) Mar 16 22:02:04.628: INFO: (18) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:462/proxy/: tls qux (200; 4.061626ms) Mar 16 22:02:04.628: INFO: (18) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 3.956025ms) Mar 16 22:02:04.628: INFO: (18) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/: foo (200; 3.860454ms) Mar 16 22:02:04.628: INFO: (18) /api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:1080/proxy/: test<... 
(200; 4.002911ms) Mar 16 22:02:04.628: INFO: (18) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 3.985487ms) Mar 16 22:02:04.628: INFO: (18) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test (200; 4.444895ms) Mar 16 22:02:04.634: INFO: (19) /api/v1/namespaces/proxy-9272/pods/http:proxy-service-st4qf-r5h7n:1080/proxy/: ... (200; 4.339869ms) Mar 16 22:02:04.634: INFO: (19) /api/v1/namespaces/proxy-9272/pods/https:proxy-service-st4qf-r5h7n:443/proxy/: test<... (200; 4.386731ms) Mar 16 22:02:04.634: INFO: (19) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname2/proxy/: bar (200; 4.555601ms) Mar 16 22:02:04.634: INFO: (19) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname2/proxy/: tls qux (200; 4.487926ms) Mar 16 22:02:04.635: INFO: (19) /api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/: foo (200; 4.932662ms) Mar 16 22:02:04.635: INFO: (19) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname1/proxy/: foo (200; 5.180965ms) Mar 16 22:02:04.635: INFO: (19) /api/v1/namespaces/proxy-9272/services/http:proxy-service-st4qf:portname2/proxy/: bar (200; 5.133165ms) Mar 16 22:02:04.635: INFO: (19) /api/v1/namespaces/proxy-9272/services/https:proxy-service-st4qf:tlsportname1/proxy/: tls baz (200; 5.136098ms) STEP: deleting ReplicationController proxy-service-st4qf in namespace proxy-9272, will wait for the garbage collector to delete the pods Mar 16 22:02:04.694: INFO: Deleting ReplicationController proxy-service-st4qf took: 6.610049ms Mar 16 22:02:04.994: INFO: Terminating ReplicationController proxy-service-st4qf pods took: 300.250331ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:02:09.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9272" for this suite. 
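Every request above goes through the apiserver's proxy subresource rather than to the pod or service directly; the URL encodes the target and port, and the response body ("foo", "bar", "tls baz", ...) comes from the backend. A minimal sketch of reproducing two of these requests by hand, reusing the namespace, pod, and port names from this run (the cluster is assumed to still be up):

# Proxy to container port 160 on the pod, via the apiserver
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/namespaces/proxy-9272/pods/proxy-service-st4qf-r5h7n:160/proxy/"

# Proxy to a named service port; the service picks a backing pod
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/namespaces/proxy-9272/services/proxy-service-st4qf:portname1/proxy/"

Per the log, both should return "foo" with HTTP 200.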
• [SLOW TEST:18.179 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":196,"skipped":3427,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:02:09.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8199 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8199 I0316 22:02:09.662397 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8199, replica count: 2 I0316 22:02:12.712847 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 22:02:15.713045 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 22:02:15.713: INFO: Creating new exec pod Mar 16 22:02:20.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8199 execpodmvhxd -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 16 22:02:20.978: INFO: stderr: "I0316 22:02:20.883759 3197 log.go:172] (0xc0005bcdc0) (0xc0006a1a40) Create stream\nI0316 22:02:20.883813 3197 log.go:172] (0xc0005bcdc0) (0xc0006a1a40) Stream added, broadcasting: 1\nI0316 22:02:20.886788 3197 log.go:172] (0xc0005bcdc0) Reply frame received for 1\nI0316 22:02:20.886850 3197 log.go:172] (0xc0005bcdc0) (0xc000a2e000) Create stream\nI0316 22:02:20.886921 3197 log.go:172] (0xc0005bcdc0) (0xc000a2e000) Stream added, broadcasting: 3\nI0316 22:02:20.888024 3197 log.go:172] (0xc0005bcdc0) Reply frame received for 3\nI0316 22:02:20.888073 3197 log.go:172] (0xc0005bcdc0) (0xc0006a1c20) Create stream\nI0316 22:02:20.888086 3197 log.go:172] (0xc0005bcdc0) (0xc0006a1c20) Stream added, broadcasting: 5\nI0316 22:02:20.889329 3197 log.go:172] (0xc0005bcdc0) Reply frame received for 5\nI0316 22:02:20.970253 3197 log.go:172] (0xc0005bcdc0) Data frame received for 5\nI0316 22:02:20.970298 3197 log.go:172] (0xc0006a1c20) (5) Data frame handling\nI0316 22:02:20.970343 3197 log.go:172] (0xc0006a1c20) (5) Data frame 
sent\n+ nc -zv -t -w 2 externalname-service 80\nI0316 22:02:20.970823 3197 log.go:172] (0xc0005bcdc0) Data frame received for 5\nI0316 22:02:20.970849 3197 log.go:172] (0xc0006a1c20) (5) Data frame handling\nI0316 22:02:20.970860 3197 log.go:172] (0xc0006a1c20) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0316 22:02:20.972076 3197 log.go:172] (0xc0005bcdc0) Data frame received for 3\nI0316 22:02:20.972100 3197 log.go:172] (0xc000a2e000) (3) Data frame handling\nI0316 22:02:20.972125 3197 log.go:172] (0xc0005bcdc0) Data frame received for 5\nI0316 22:02:20.972154 3197 log.go:172] (0xc0006a1c20) (5) Data frame handling\nI0316 22:02:20.973765 3197 log.go:172] (0xc0005bcdc0) Data frame received for 1\nI0316 22:02:20.973783 3197 log.go:172] (0xc0006a1a40) (1) Data frame handling\nI0316 22:02:20.973794 3197 log.go:172] (0xc0006a1a40) (1) Data frame sent\nI0316 22:02:20.973805 3197 log.go:172] (0xc0005bcdc0) (0xc0006a1a40) Stream removed, broadcasting: 1\nI0316 22:02:20.973864 3197 log.go:172] (0xc0005bcdc0) Go away received\nI0316 22:02:20.974175 3197 log.go:172] (0xc0005bcdc0) (0xc0006a1a40) Stream removed, broadcasting: 1\nI0316 22:02:20.974192 3197 log.go:172] (0xc0005bcdc0) (0xc000a2e000) Stream removed, broadcasting: 3\nI0316 22:02:20.974202 3197 log.go:172] (0xc0005bcdc0) (0xc0006a1c20) Stream removed, broadcasting: 5\n" Mar 16 22:02:20.978: INFO: stdout: "" Mar 16 22:02:20.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8199 execpodmvhxd -- /bin/sh -x -c nc -zv -t -w 2 10.103.199.109 80' Mar 16 22:02:21.175: INFO: stderr: "I0316 22:02:21.099982 3218 log.go:172] (0xc00065aa50) (0xc000693cc0) Create stream\nI0316 22:02:21.100024 3218 log.go:172] (0xc00065aa50) (0xc000693cc0) Stream added, broadcasting: 1\nI0316 22:02:21.104643 3218 log.go:172] (0xc00065aa50) Reply frame received for 1\nI0316 22:02:21.104681 3218 log.go:172] (0xc00065aa50) (0xc000622000) Create stream\nI0316 22:02:21.104690 3218 log.go:172] (0xc00065aa50) (0xc000622000) Stream added, broadcasting: 3\nI0316 22:02:21.106134 3218 log.go:172] (0xc00065aa50) Reply frame received for 3\nI0316 22:02:21.106158 3218 log.go:172] (0xc00065aa50) (0xc000693d60) Create stream\nI0316 22:02:21.106166 3218 log.go:172] (0xc00065aa50) (0xc000693d60) Stream added, broadcasting: 5\nI0316 22:02:21.107144 3218 log.go:172] (0xc00065aa50) Reply frame received for 5\nI0316 22:02:21.168984 3218 log.go:172] (0xc00065aa50) Data frame received for 5\nI0316 22:02:21.169026 3218 log.go:172] (0xc000693d60) (5) Data frame handling\nI0316 22:02:21.169041 3218 log.go:172] (0xc000693d60) (5) Data frame sent\nI0316 22:02:21.169053 3218 log.go:172] (0xc00065aa50) Data frame received for 5\nI0316 22:02:21.169063 3218 log.go:172] (0xc000693d60) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.199.109 80\nConnection to 10.103.199.109 80 port [tcp/http] succeeded!\nI0316 22:02:21.169105 3218 log.go:172] (0xc00065aa50) Data frame received for 3\nI0316 22:02:21.169222 3218 log.go:172] (0xc000622000) (3) Data frame handling\nI0316 22:02:21.170781 3218 log.go:172] (0xc00065aa50) Data frame received for 1\nI0316 22:02:21.170803 3218 log.go:172] (0xc000693cc0) (1) Data frame handling\nI0316 22:02:21.170820 3218 log.go:172] (0xc000693cc0) (1) Data frame sent\nI0316 22:02:21.170837 3218 log.go:172] (0xc00065aa50) (0xc000693cc0) Stream removed, broadcasting: 1\nI0316 22:02:21.170859 3218 log.go:172] (0xc00065aa50) Go away received\nI0316 22:02:21.171315 3218 log.go:172] 
(0xc00065aa50) (0xc000693cc0) Stream removed, broadcasting: 1\nI0316 22:02:21.171343 3218 log.go:172] (0xc00065aa50) (0xc000622000) Stream removed, broadcasting: 3\nI0316 22:02:21.171356 3218 log.go:172] (0xc00065aa50) (0xc000693d60) Stream removed, broadcasting: 5\n" Mar 16 22:02:21.175: INFO: stdout: "" Mar 16 22:02:21.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8199 execpodmvhxd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31573' Mar 16 22:02:21.383: INFO: stderr: "I0316 22:02:21.304959 3240 log.go:172] (0xc0000f3340) (0xc0004101e0) Create stream\nI0316 22:02:21.305019 3240 log.go:172] (0xc0000f3340) (0xc0004101e0) Stream added, broadcasting: 1\nI0316 22:02:21.307633 3240 log.go:172] (0xc0000f3340) Reply frame received for 1\nI0316 22:02:21.307679 3240 log.go:172] (0xc0000f3340) (0xc00062b9a0) Create stream\nI0316 22:02:21.307693 3240 log.go:172] (0xc0000f3340) (0xc00062b9a0) Stream added, broadcasting: 3\nI0316 22:02:21.308623 3240 log.go:172] (0xc0000f3340) Reply frame received for 3\nI0316 22:02:21.308658 3240 log.go:172] (0xc0000f3340) (0xc00062bb80) Create stream\nI0316 22:02:21.308670 3240 log.go:172] (0xc0000f3340) (0xc00062bb80) Stream added, broadcasting: 5\nI0316 22:02:21.309672 3240 log.go:172] (0xc0000f3340) Reply frame received for 5\nI0316 22:02:21.376474 3240 log.go:172] (0xc0000f3340) Data frame received for 3\nI0316 22:02:21.376509 3240 log.go:172] (0xc00062b9a0) (3) Data frame handling\nI0316 22:02:21.376666 3240 log.go:172] (0xc0000f3340) Data frame received for 5\nI0316 22:02:21.376693 3240 log.go:172] (0xc00062bb80) (5) Data frame handling\nI0316 22:02:21.376720 3240 log.go:172] (0xc00062bb80) (5) Data frame sent\nI0316 22:02:21.376734 3240 log.go:172] (0xc0000f3340) Data frame received for 5\nI0316 22:02:21.376744 3240 log.go:172] (0xc00062bb80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31573\nConnection to 172.17.0.10 31573 port [tcp/31573] succeeded!\nI0316 22:02:21.378316 3240 log.go:172] (0xc0000f3340) Data frame received for 1\nI0316 22:02:21.378360 3240 log.go:172] (0xc0004101e0) (1) Data frame handling\nI0316 22:02:21.378457 3240 log.go:172] (0xc0004101e0) (1) Data frame sent\nI0316 22:02:21.378482 3240 log.go:172] (0xc0000f3340) (0xc0004101e0) Stream removed, broadcasting: 1\nI0316 22:02:21.378499 3240 log.go:172] (0xc0000f3340) Go away received\nI0316 22:02:21.378922 3240 log.go:172] (0xc0000f3340) (0xc0004101e0) Stream removed, broadcasting: 1\nI0316 22:02:21.378947 3240 log.go:172] (0xc0000f3340) (0xc00062b9a0) Stream removed, broadcasting: 3\nI0316 22:02:21.378960 3240 log.go:172] (0xc0000f3340) (0xc00062bb80) Stream removed, broadcasting: 5\n" Mar 16 22:02:21.383: INFO: stdout: "" Mar 16 22:02:21.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8199 execpodmvhxd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31573' Mar 16 22:02:21.581: INFO: stderr: "I0316 22:02:21.512120 3264 log.go:172] (0xc0007980b0) (0xc0005166e0) Create stream\nI0316 22:02:21.512170 3264 log.go:172] (0xc0007980b0) (0xc0005166e0) Stream added, broadcasting: 1\nI0316 22:02:21.514034 3264 log.go:172] (0xc0007980b0) Reply frame received for 1\nI0316 22:02:21.514058 3264 log.go:172] (0xc0007980b0) (0xc0008a0000) Create stream\nI0316 22:02:21.514065 3264 log.go:172] (0xc0007980b0) (0xc0008a0000) Stream added, broadcasting: 3\nI0316 22:02:21.514730 3264 log.go:172] (0xc0007980b0) Reply frame received for 3\nI0316 22:02:21.514766 3264 log.go:172] (0xc0007980b0) 
(0xc0009b6000) Create stream\nI0316 22:02:21.514784 3264 log.go:172] (0xc0007980b0) (0xc0009b6000) Stream added, broadcasting: 5\nI0316 22:02:21.515336 3264 log.go:172] (0xc0007980b0) Reply frame received for 5\nI0316 22:02:21.576432 3264 log.go:172] (0xc0007980b0) Data frame received for 3\nI0316 22:02:21.576530 3264 log.go:172] (0xc0008a0000) (3) Data frame handling\nI0316 22:02:21.576556 3264 log.go:172] (0xc0007980b0) Data frame received for 5\nI0316 22:02:21.576571 3264 log.go:172] (0xc0009b6000) (5) Data frame handling\nI0316 22:02:21.576582 3264 log.go:172] (0xc0009b6000) (5) Data frame sent\nI0316 22:02:21.576595 3264 log.go:172] (0xc0007980b0) Data frame received for 5\nI0316 22:02:21.576604 3264 log.go:172] (0xc0009b6000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31573\nConnection to 172.17.0.8 31573 port [tcp/31573] succeeded!\nI0316 22:02:21.578387 3264 log.go:172] (0xc0007980b0) Data frame received for 1\nI0316 22:02:21.578403 3264 log.go:172] (0xc0005166e0) (1) Data frame handling\nI0316 22:02:21.578456 3264 log.go:172] (0xc0005166e0) (1) Data frame sent\nI0316 22:02:21.578567 3264 log.go:172] (0xc0007980b0) (0xc0005166e0) Stream removed, broadcasting: 1\nI0316 22:02:21.578616 3264 log.go:172] (0xc0007980b0) Go away received\nI0316 22:02:21.578948 3264 log.go:172] (0xc0007980b0) (0xc0005166e0) Stream removed, broadcasting: 1\nI0316 22:02:21.578960 3264 log.go:172] (0xc0007980b0) (0xc0008a0000) Stream removed, broadcasting: 3\nI0316 22:02:21.578965 3264 log.go:172] (0xc0007980b0) (0xc0009b6000) Stream removed, broadcasting: 5\n" Mar 16 22:02:21.581: INFO: stdout: "" Mar 16 22:02:21.581: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:02:21.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8199" for this suite. 
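The type change the test performs through the client can also be done with kubectl. A minimal sketch, assuming the service and exec pod names from this run; the externalName target and the selector are illustrative placeholders (the selector must match whatever labels the backing pods actually carry):

# Start from an ExternalName service (example.com is a placeholder target)
kubectl apply -n services-8199 -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com
EOF

# Switch to NodePort: in a JSON merge patch, "externalName": null drops the
# field, and the ports/selector give the service real backends
kubectl patch svc externalname-service -n services-8199 --type=merge -p \
  '{"spec":{"type":"NodePort","externalName":null,"selector":{"name":"externalname-service"},"ports":[{"port":80,"targetPort":80}]}}'

# Verify from inside the cluster, exactly as the test does
kubectl exec -n services-8199 execpodmvhxd -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'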
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.110 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":197,"skipped":3434,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:02:21.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:02:21.684: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:02:22.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4954" for this suite. 
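The status sub-resource exercised here only exists when the CRD declares it; with it enabled, GET/PUT/PATCH on .../status touch only .status, and writes to .spec through that endpoint are ignored. A minimal sketch of such a definition (group, kind, and names are illustrative placeholders, not the test's generated fixture):

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com      # must be <plural>.<group>
spec:
  group: mygroup.example.com
  scope: Cluster
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}      # enables the /status endpoint this test exercises
EOF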
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":198,"skipped":3436,"failed":0} ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:02:22.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 16 22:02:22.891: INFO: created pod pod-service-account-defaultsa Mar 16 22:02:22.892: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 16 22:02:22.914: INFO: created pod pod-service-account-mountsa Mar 16 22:02:22.914: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 16 22:02:22.922: INFO: created pod pod-service-account-nomountsa Mar 16 22:02:22.922: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 16 22:02:22.934: INFO: created pod pod-service-account-defaultsa-mountspec Mar 16 22:02:22.934: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 16 22:02:22.953: INFO: created pod pod-service-account-mountsa-mountspec Mar 16 22:02:22.953: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 16 22:02:22.958: INFO: created pod pod-service-account-nomountsa-mountspec Mar 16 22:02:22.958: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 16 22:02:22.971: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 16 22:02:22.971: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 16 22:02:22.990: INFO: created pod pod-service-account-mountsa-nomountspec Mar 16 22:02:22.990: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 16 22:02:23.064: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 16 22:02:23.064: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:02:23.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-986" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":199,"skipped":3436,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:02:23.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:02:23.238: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:02:24.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-493" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":200,"skipped":3443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:02:24.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 16 22:02:25.953: INFO: Waiting up to 5m0s for pod "client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021" in namespace "containers-3570" to be "success or failure" Mar 16 22:02:26.228: INFO: Pod "client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021": Phase="Pending", Reason="", readiness=false. Elapsed: 274.786996ms Mar 16 22:02:28.466: INFO: Pod "client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021": Phase="Pending", Reason="", readiness=false. Elapsed: 2.512774695s Mar 16 22:02:30.490: INFO: Pod "client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021": Phase="Pending", Reason="", readiness=false. Elapsed: 4.536607188s Mar 16 22:02:32.669: INFO: Pod "client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.715814422s Mar 16 22:02:34.698: INFO: Pod "client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021": Phase="Running", Reason="", readiness=true. Elapsed: 8.745076542s Mar 16 22:02:36.702: INFO: Pod "client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.7489841s STEP: Saw pod success Mar 16 22:02:36.702: INFO: Pod "client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021" satisfied condition "success or failure" Mar 16 22:02:36.705: INFO: Trying to get logs from node jerma-worker2 pod client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021 container test-container: STEP: delete the pod Mar 16 22:02:36.727: INFO: Waiting for pod client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021 to disappear Mar 16 22:02:36.731: INFO: Pod client-containers-68080d6e-e8da-4cf4-a09d-04e74949c021 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:02:36.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3570" for this suite. • [SLOW TEST:11.758 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3495,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:02:36.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' 
condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:03:04.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-136" for this suite. • [SLOW TEST:27.625 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3516,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:03:04.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:03:15.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5720" for this suite. • [SLOW TEST:11.299 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":203,"skipped":3520,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:03:15.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:03:19.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1757" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3529,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:03:19.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 16 22:03:19.880: INFO: Waiting up to 5m0s for pod "downward-api-617c60bc-12d2-42b3-95dc-744bd70f4191" in namespace "downward-api-7674" to be "success or failure" Mar 16 22:03:19.884: INFO: Pod "downward-api-617c60bc-12d2-42b3-95dc-744bd70f4191": Phase="Pending", Reason="", readiness=false. Elapsed: 3.523285ms Mar 16 22:03:21.888: INFO: Pod "downward-api-617c60bc-12d2-42b3-95dc-744bd70f4191": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007619161s Mar 16 22:03:23.892: INFO: Pod "downward-api-617c60bc-12d2-42b3-95dc-744bd70f4191": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011848375s STEP: Saw pod success Mar 16 22:03:23.892: INFO: Pod "downward-api-617c60bc-12d2-42b3-95dc-744bd70f4191" satisfied condition "success or failure" Mar 16 22:03:23.895: INFO: Trying to get logs from node jerma-worker2 pod downward-api-617c60bc-12d2-42b3-95dc-744bd70f4191 container dapi-container: STEP: delete the pod Mar 16 22:03:23.916: INFO: Waiting for pod downward-api-617c60bc-12d2-42b3-95dc-744bd70f4191 to disappear Mar 16 22:03:23.920: INFO: Pod downward-api-617c60bc-12d2-42b3-95dc-744bd70f4191 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:03:23.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7674" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3532,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:03:23.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 16 22:03:24.008: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 22:03:24.032: INFO: Waiting for terminating namespaces to be deleted... 
Mar 16 22:03:24.035: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 16 22:03:24.041: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 22:03:24.041: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 22:03:24.041: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 22:03:24.041: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 22:03:24.041: INFO: busybox-scheduling-0d883626-a78c-448a-bcc3-f5271cb21111 from kubelet-test-1757 started at 2020-03-16 22:03:15 +0000 UTC (1 container statuses recorded) Mar 16 22:03:24.041: INFO: Container busybox-scheduling-0d883626-a78c-448a-bcc3-f5271cb21111 ready: true, restart count 0 Mar 16 22:03:24.041: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 16 22:03:24.046: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 22:03:24.046: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 22:03:24.046: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 16 22:03:24.046: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-be3a69ac-cfe5-454f-b93a-f7b19acf2e33 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-be3a69ac-cfe5-454f-b93a-f7b19acf2e33 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-be3a69ac-cfe5-454f-b93a-f7b19acf2e33 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:03:40.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-649" for this suite. 
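The scheduler (and kubelet) treat a host port as occupied only for an exact (hostPort, hostIP, protocol) triple, which is why pod2 (different hostIP) and pod3 (different protocol) can land on the node pod1 already occupies. A minimal sketch of two such non-conflicting pods (names and image are illustrative; nodeName pins them to one node so the triple is the only thing keeping them apart):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostport-tcp-loopback
spec:
  nodeName: jerma-worker2
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: hostport-udp-loopback2
spec:
  nodeName: jerma-worker2
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2    # different hostIP than the first pod
      protocol: UDP        # and a different protocol: no conflict
EOF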
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.463 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":206,"skipped":3537,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:03:40.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 16 22:03:40.450: INFO: Waiting up to 5m0s for pod "downward-api-dfc27f61-7122-4a93-854a-b9d01fd192e9" in namespace "downward-api-6016" to be "success or failure" Mar 16 22:03:40.472: INFO: Pod "downward-api-dfc27f61-7122-4a93-854a-b9d01fd192e9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.065405ms Mar 16 22:03:42.476: INFO: Pod "downward-api-dfc27f61-7122-4a93-854a-b9d01fd192e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025888196s Mar 16 22:03:44.480: INFO: Pod "downward-api-dfc27f61-7122-4a93-854a-b9d01fd192e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029875155s STEP: Saw pod success Mar 16 22:03:44.480: INFO: Pod "downward-api-dfc27f61-7122-4a93-854a-b9d01fd192e9" satisfied condition "success or failure" Mar 16 22:03:44.483: INFO: Trying to get logs from node jerma-worker pod downward-api-dfc27f61-7122-4a93-854a-b9d01fd192e9 container dapi-container: STEP: delete the pod Mar 16 22:03:44.514: INFO: Waiting for pod downward-api-dfc27f61-7122-4a93-854a-b9d01fd192e9 to disappear Mar 16 22:03:44.555: INFO: Pod downward-api-dfc27f61-7122-4a93-854a-b9d01fd192e9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:03:44.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6016" for this suite. 
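When a container sets no resource limits, limits.cpu and limits.memory resolved through the downward API fall back to the node's allocatable values, which is what this test asserts. A minimal sketch of surfacing them as env vars (pod name is illustrative; a resourceFieldRef with no containerName refers to the container it appears in):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_LIMIT='"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu       # no limit set: node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory    # no limit set: node allocatable memory
EOF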
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3546,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:03:44.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 22:03:44.981: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 22:03:47.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993024, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993024, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993025, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993024, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 22:03:49.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993024, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993024, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993025, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993024, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 22:03:52.322: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:03:52.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4746" for this suite. STEP: Destroying namespace "webhook-4746-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.030 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":208,"skipped":3548,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:03:52.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:03:52.662: INFO: Create a RollingUpdate DaemonSet Mar 16 22:03:52.666: INFO: Check that daemon pods launch on every node of the cluster Mar 16 22:03:52.671: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:03:52.717: INFO: Number of nodes with available pods: 0 Mar 16 22:03:52.717: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:03:53.722: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:03:53.726: INFO: Number of nodes with available pods: 0 Mar 16 22:03:53.726: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:03:54.723: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:03:54.726: INFO: Number of nodes with available pods: 0 Mar 16 22:03:54.726: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:03:56.006: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:03:56.149: INFO: Number of nodes with available pods: 0 Mar 16 22:03:56.149: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:03:56.723: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:03:56.726: INFO: Number of nodes with available pods: 0 Mar 16 22:03:56.727: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:03:57.727: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:03:57.731: INFO: Number of nodes with available pods: 2 Mar 16 22:03:57.731: INFO: Number of running nodes: 2, number of available pods: 2 Mar 16 22:03:57.731: INFO: Update the DaemonSet to trigger a rollout Mar 16 22:03:57.742: INFO: Updating DaemonSet daemon-set Mar 16 22:04:01.758: INFO: Roll back the DaemonSet before rollout is complete Mar 16 22:04:01.764: INFO: Updating DaemonSet daemon-set Mar 16 22:04:01.764: INFO: Make sure DaemonSet rollback is complete Mar 16 22:04:01.772: INFO: Wrong image for pod: daemon-set-72s4b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 16 22:04:01.772: INFO: Pod daemon-set-72s4b is not available Mar 16 22:04:01.826: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:04:02.830: INFO: Wrong image for pod: daemon-set-72s4b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 16 22:04:02.830: INFO: Pod daemon-set-72s4b is not available Mar 16 22:04:02.835: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:04:03.830: INFO: Wrong image for pod: daemon-set-72s4b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Mar 16 22:04:03.830: INFO: Pod daemon-set-72s4b is not available Mar 16 22:04:03.833: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:04:04.829: INFO: Pod daemon-set-rnbgx is not available Mar 16 22:04:04.833: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4267, will wait for the garbage collector to delete the pods Mar 16 22:04:04.898: INFO: Deleting DaemonSet.extensions daemon-set took: 5.935524ms Mar 16 22:04:05.198: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.239488ms Mar 16 22:04:09.502: INFO: Number of nodes with available pods: 0 Mar 16 22:04:09.502: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 22:04:09.505: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4267/daemonsets","resourceVersion":"332740"},"items":null} Mar 16 22:04:09.507: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4267/pods","resourceVersion":"332740"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:04:09.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4267" for this suite. 
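Note: the sequence this test drives through the API (break the image, then roll back before the rollout finishes) can be approximated by hand with standard kubectl rollout commands. A minimal sketch, assuming the RollingUpdate DaemonSet is named daemon-set as in the log; the container name "app" is an assumption, not taken from the test source:

# trigger a rollout that can never complete: the image is unpullable
kubectl set image daemonset/daemon-set app=foo:non-existent
# roll back to the previous revision before the rollout finishes
kubectl rollout undo daemonset/daemon-set
# pods on untouched nodes should keep running without extra restarts
kubectl rollout status daemonset/daemon-set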
• [SLOW TEST:16.929 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":209,"skipped":3568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:04:09.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 16 22:04:14.167: INFO: Successfully updated pod "labelsupdate71a1b9a1-3c20-4019-821f-ea541afc6891" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:04:16.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7720" for this suite. 
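Note: the label-update check above relies on the kubelet refreshing downwardAPI volume files when pod metadata changes. A minimal sketch of the same shape with plain kubectl; the pod name, label, and file path below are illustrative, not the test's generated values:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
# edit the label; the kubelet rewrites /etc/podinfo/labels within its sync period
kubectl label pod labelsupdate-demo stage=after --overwrite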
• [SLOW TEST:6.676 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3593,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:04:16.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 16 22:04:20.858: INFO: Successfully updated pod "annotationupdate0d3a6659-5ba1-44bb-b7dc-a43fefded77e" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:04:22.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8129" for this suite. • [SLOW TEST:6.711 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3595,"failed":0} SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:04:22.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:04:22.971: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.696574ms) Mar 16 22:04:22.975: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.354975ms) Mar 16 22:04:22.979: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.774293ms) Mar 16 22:04:22.982: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.316665ms) Mar 16 22:04:23.009: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 27.260305ms) Mar 16 22:04:23.014: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.04997ms) Mar 16 22:04:23.016: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.510699ms) Mar 16 22:04:23.019: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.57003ms) Mar 16 22:04:23.021: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.734928ms) Mar 16 22:04:23.024: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.476382ms) Mar 16 22:04:23.027: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.849624ms) Mar 16 22:04:23.029: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.582725ms) Mar 16 22:04:23.032: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.750259ms) Mar 16 22:04:23.035: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.044003ms) Mar 16 22:04:23.039: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.274693ms) Mar 16 22:04:23.043: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.316026ms) Mar 16 22:04:23.046: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.282055ms) Mar 16 22:04:23.050: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.350396ms) Mar 16 22:04:23.053: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.28682ms) Mar 16 22:04:23.056: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.261067ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:04:23.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-169" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":212,"skipped":3601,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:04:23.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8814 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8814 STEP: Creating statefulset with conflicting port in namespace statefulset-8814 STEP: Waiting until pod test-pod starts running in namespace statefulset-8814 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-8814 Mar 16 22:04:27.211: INFO: Observed stateful pod in namespace: statefulset-8814, name: ss-0, uid: ee8bfa13-78c4-4eb6-ba26-91a46296a6d7, status phase: Failed. Waiting for the statefulset controller to delete it. Mar 16 22:04:27.243: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8814 STEP: Removing pod with conflicting port in namespace statefulset-8814 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8814 and enters the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 16 22:04:31.323: INFO: Deleting all statefulsets in ns statefulset-8814 Mar 16 22:04:31.333: INFO: Scaling statefulset ss to 0 Mar 16 22:04:41.373: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 22:04:41.377: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:04:41.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8814" for this suite.
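Note: the property verified above is that a stateful pod that ends up in phase Failed is deleted by the StatefulSet controller and recreated under the same identity (same name and ordinal). The recreation half can be observed by hand against any running StatefulSet whose first pod is ss-0, as here:

# delete the pod; the controller replaces it with a new pod of the same name
kubectl delete pod ss-0
kubectl get pod ss-0 -w   # re-run if the name is briefly absent while it is recreated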
• [SLOW TEST:18.336 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":213,"skipped":3608,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:04:41.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 16 22:04:41.503: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:04:41.516: INFO: Number of nodes with available pods: 0 Mar 16 22:04:41.516: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:04:42.653: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:04:42.656: INFO: Number of nodes with available pods: 0 Mar 16 22:04:42.656: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:04:43.605: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:04:43.609: INFO: Number of nodes with available pods: 0 Mar 16 22:04:43.609: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:04:44.541: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:04:44.552: INFO: Number of nodes with available pods: 0 Mar 16 22:04:44.552: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:04:45.521: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:04:45.525: INFO: Number of nodes with available pods: 2 Mar 16 22:04:45.525: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 16 22:04:45.546: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:04:45.552: INFO: Number of nodes with available pods: 2 Mar 16 22:04:45.552: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6146, will wait for the garbage collector to delete the pods Mar 16 22:04:46.748: INFO: Deleting DaemonSet.extensions daemon-set took: 8.992157ms Mar 16 22:04:46.948: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.25565ms Mar 16 22:04:59.552: INFO: Number of nodes with available pods: 0 Mar 16 22:04:59.552: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 22:04:59.556: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6146/daemonsets","resourceVersion":"333181"},"items":null} Mar 16 22:04:59.559: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6146/pods","resourceVersion":"333181"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:04:59.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6146" for this suite. • [SLOW TEST:18.176 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":214,"skipped":3629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:04:59.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-63cbe9dc-00be-4ceb-bc09-7a60b9672ffe in namespace container-probe-8932 Mar 16 22:05:03.675: INFO: Started pod busybox-63cbe9dc-00be-4ceb-bc09-7a60b9672ffe in namespace container-probe-8932 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 22:05:03.678: INFO: Initial restart count of pod 
busybox-63cbe9dc-00be-4ceb-bc09-7a60b9672ffe is 0 Mar 16 22:05:49.788: INFO: Restart count of pod container-probe-8932/busybox-63cbe9dc-00be-4ceb-bc09-7a60b9672ffe is now 1 (46.109771349s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:05:49.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8932" for this suite. • [SLOW TEST:50.251 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3654,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:05:49.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:05:53.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8211" for this suite. 
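Note: the read-only root filesystem behavior checked above is controlled by a single container securityContext field. A minimal sketch; the pod name and probe command are illustrative, not from the test source:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo data > /file || echo 'write refused: read-only root filesystem'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs readonly-root-demo   # once the pod completes, shows the refused write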
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3665,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:05:53.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 22:05:54.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-108d43d3-a23f-4dad-93c7-2b226d2d7738" in namespace "downward-api-1500" to be "success or failure" Mar 16 22:05:54.027: INFO: Pod "downwardapi-volume-108d43d3-a23f-4dad-93c7-2b226d2d7738": Phase="Pending", Reason="", readiness=false. Elapsed: 3.046194ms Mar 16 22:05:56.031: INFO: Pod "downwardapi-volume-108d43d3-a23f-4dad-93c7-2b226d2d7738": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007346675s Mar 16 22:05:58.035: INFO: Pod "downwardapi-volume-108d43d3-a23f-4dad-93c7-2b226d2d7738": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011486514s STEP: Saw pod success Mar 16 22:05:58.035: INFO: Pod "downwardapi-volume-108d43d3-a23f-4dad-93c7-2b226d2d7738" satisfied condition "success or failure" Mar 16 22:05:58.039: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-108d43d3-a23f-4dad-93c7-2b226d2d7738 container client-container: STEP: delete the pod Mar 16 22:05:58.072: INFO: Waiting for pod downwardapi-volume-108d43d3-a23f-4dad-93c7-2b226d2d7738 to disappear Mar 16 22:05:58.087: INFO: Pod downwardapi-volume-108d43d3-a23f-4dad-93c7-2b226d2d7738 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:05:58.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1500" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:05:58.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 16 22:05:58.137: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:06:03.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4950" for this suite. • [SLOW TEST:5.493 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":218,"skipped":3752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:06:03.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-3228e97e-3dcb-4319-962a-11721ee286cd STEP: Creating a pod to test consume configMaps Mar 16 22:06:03.655: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9a29394f-7dd2-488b-8164-b2c75afa80a6" in namespace "projected-406" to be "success or failure" Mar 16 22:06:03.679: INFO: Pod 
"pod-projected-configmaps-9a29394f-7dd2-488b-8164-b2c75afa80a6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.560996ms Mar 16 22:06:05.743: INFO: Pod "pod-projected-configmaps-9a29394f-7dd2-488b-8164-b2c75afa80a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088310146s Mar 16 22:06:07.747: INFO: Pod "pod-projected-configmaps-9a29394f-7dd2-488b-8164-b2c75afa80a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092171794s STEP: Saw pod success Mar 16 22:06:07.747: INFO: Pod "pod-projected-configmaps-9a29394f-7dd2-488b-8164-b2c75afa80a6" satisfied condition "success or failure" Mar 16 22:06:07.750: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-9a29394f-7dd2-488b-8164-b2c75afa80a6 container projected-configmap-volume-test: STEP: delete the pod Mar 16 22:06:07.799: INFO: Waiting for pod pod-projected-configmaps-9a29394f-7dd2-488b-8164-b2c75afa80a6 to disappear Mar 16 22:06:07.817: INFO: Pod pod-projected-configmaps-9a29394f-7dd2-488b-8164-b2c75afa80a6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:06:07.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-406" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3789,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:06:07.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-2e6f6589-2f7e-4687-ad8a-84e7aaafcaa9 STEP: Creating a pod to test consume secrets Mar 16 22:06:07.918: INFO: Waiting up to 5m0s for pod "pod-secrets-3fe6f437-94ea-4bf7-8bc6-2170d9f82cbe" in namespace "secrets-1121" to be "success or failure" Mar 16 22:06:07.922: INFO: Pod "pod-secrets-3fe6f437-94ea-4bf7-8bc6-2170d9f82cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087162ms Mar 16 22:06:09.954: INFO: Pod "pod-secrets-3fe6f437-94ea-4bf7-8bc6-2170d9f82cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035651013s Mar 16 22:06:11.962: INFO: Pod "pod-secrets-3fe6f437-94ea-4bf7-8bc6-2170d9f82cbe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044068618s STEP: Saw pod success Mar 16 22:06:11.962: INFO: Pod "pod-secrets-3fe6f437-94ea-4bf7-8bc6-2170d9f82cbe" satisfied condition "success or failure" Mar 16 22:06:11.965: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-3fe6f437-94ea-4bf7-8bc6-2170d9f82cbe container secret-volume-test: STEP: delete the pod Mar 16 22:06:11.981: INFO: Waiting for pod pod-secrets-3fe6f437-94ea-4bf7-8bc6-2170d9f82cbe to disappear Mar 16 22:06:11.986: INFO: Pod pod-secrets-3fe6f437-94ea-4bf7-8bc6-2170d9f82cbe no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:06:11.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1121" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3828,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:06:11.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 16 22:06:12.066: INFO: Waiting up to 5m0s for pod "pod-8b994f0d-a363-4855-a044-ccc41f399b7d" in namespace "emptydir-9273" to be "success or failure" Mar 16 22:06:12.081: INFO: Pod "pod-8b994f0d-a363-4855-a044-ccc41f399b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.547481ms Mar 16 22:06:14.085: INFO: Pod "pod-8b994f0d-a363-4855-a044-ccc41f399b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01925534s Mar 16 22:06:16.090: INFO: Pod "pod-8b994f0d-a363-4855-a044-ccc41f399b7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023824243s STEP: Saw pod success Mar 16 22:06:16.090: INFO: Pod "pod-8b994f0d-a363-4855-a044-ccc41f399b7d" satisfied condition "success or failure" Mar 16 22:06:16.093: INFO: Trying to get logs from node jerma-worker2 pod pod-8b994f0d-a363-4855-a044-ccc41f399b7d container test-container: STEP: delete the pod Mar 16 22:06:16.113: INFO: Waiting for pod pod-8b994f0d-a363-4855-a044-ccc41f399b7d to disappear Mar 16 22:06:16.117: INFO: Pod pod-8b994f0d-a363-4855-a044-ccc41f399b7d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:06:16.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9273" for this suite. 
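Note: the "correct mode" assertion above is about the permissions of an emptyDir mount backed by the node's default medium. A minimal sketch that prints the mount's mode; the pod below is illustrative, and drwxrwxrwx is the mode the conformance suite expects for the default medium:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
kubectl logs emptydir-mode-demo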
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3838,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:06:16.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:06:16.197: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:06:17.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8385" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":222,"skipped":3839,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:06:17.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4883 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4883 STEP: creating replication controller externalsvc in namespace services-4883 I0316 22:06:17.382037 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4883, replica count: 2 I0316 22:06:20.432425 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 22:06:23.432748 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the 
NodePort service to type=ExternalName Mar 16 22:06:23.510: INFO: Creating new exec pod Mar 16 22:06:27.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4883 execpoddcmq6 -- /bin/sh -x -c nslookup nodeport-service' Mar 16 22:06:30.481: INFO: stderr: "I0316 22:06:30.391955 3285 log.go:172] (0xc000740a50) (0xc0006d8000) Create stream\nI0316 22:06:30.392008 3285 log.go:172] (0xc000740a50) (0xc0006d8000) Stream added, broadcasting: 1\nI0316 22:06:30.395262 3285 log.go:172] (0xc000740a50) Reply frame received for 1\nI0316 22:06:30.395292 3285 log.go:172] (0xc000740a50) (0xc000754000) Create stream\nI0316 22:06:30.395299 3285 log.go:172] (0xc000740a50) (0xc000754000) Stream added, broadcasting: 3\nI0316 22:06:30.396295 3285 log.go:172] (0xc000740a50) Reply frame received for 3\nI0316 22:06:30.396319 3285 log.go:172] (0xc000740a50) (0xc0007540a0) Create stream\nI0316 22:06:30.396328 3285 log.go:172] (0xc000740a50) (0xc0007540a0) Stream added, broadcasting: 5\nI0316 22:06:30.397540 3285 log.go:172] (0xc000740a50) Reply frame received for 5\nI0316 22:06:30.464326 3285 log.go:172] (0xc000740a50) Data frame received for 5\nI0316 22:06:30.464357 3285 log.go:172] (0xc0007540a0) (5) Data frame handling\nI0316 22:06:30.464376 3285 log.go:172] (0xc0007540a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0316 22:06:30.474154 3285 log.go:172] (0xc000740a50) Data frame received for 3\nI0316 22:06:30.474171 3285 log.go:172] (0xc000754000) (3) Data frame handling\nI0316 22:06:30.474184 3285 log.go:172] (0xc000754000) (3) Data frame sent\nI0316 22:06:30.475216 3285 log.go:172] (0xc000740a50) Data frame received for 3\nI0316 22:06:30.475233 3285 log.go:172] (0xc000754000) (3) Data frame handling\nI0316 22:06:30.475248 3285 log.go:172] (0xc000754000) (3) Data frame sent\nI0316 22:06:30.475762 3285 log.go:172] (0xc000740a50) Data frame received for 5\nI0316 22:06:30.475779 3285 log.go:172] (0xc0007540a0) (5) Data frame handling\nI0316 22:06:30.475808 3285 log.go:172] (0xc000740a50) Data frame received for 3\nI0316 22:06:30.475839 3285 log.go:172] (0xc000754000) (3) Data frame handling\nI0316 22:06:30.477548 3285 log.go:172] (0xc000740a50) Data frame received for 1\nI0316 22:06:30.477566 3285 log.go:172] (0xc0006d8000) (1) Data frame handling\nI0316 22:06:30.477577 3285 log.go:172] (0xc0006d8000) (1) Data frame sent\nI0316 22:06:30.477589 3285 log.go:172] (0xc000740a50) (0xc0006d8000) Stream removed, broadcasting: 1\nI0316 22:06:30.477766 3285 log.go:172] (0xc000740a50) Go away received\nI0316 22:06:30.477875 3285 log.go:172] (0xc000740a50) (0xc0006d8000) Stream removed, broadcasting: 1\nI0316 22:06:30.477890 3285 log.go:172] (0xc000740a50) (0xc000754000) Stream removed, broadcasting: 3\nI0316 22:06:30.477896 3285 log.go:172] (0xc000740a50) (0xc0007540a0) Stream removed, broadcasting: 5\n" Mar 16 22:06:30.481: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4883.svc.cluster.local\tcanonical name = externalsvc.services-4883.svc.cluster.local.\nName:\texternalsvc.services-4883.svc.cluster.local\nAddress: 10.104.97.27\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4883, will wait for the garbage collector to delete the pods Mar 16 22:06:30.560: INFO: Deleting ReplicationController externalsvc took: 25.565033ms Mar 16 22:06:30.860: INFO: Terminating ReplicationController externalsvc pods took: 300.208865ms Mar 16 22:06:39.614: INFO: Cleaning up the NodePort to ExternalName test service 
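Note: the nslookup output above is the heart of the check: after the type change, the original service name must resolve as a CNAME to the externalName target. The same lookup can be rerun from any throwaway pod while the namespace still exists (namespace and service names are the ones in the log; the pod name is arbitrary):

kubectl run dns-check --namespace=services-4883 --image=docker.io/library/busybox:1.29 \
  --restart=Never --rm -it -- nslookup nodeport-service
# expect: canonical name = externalsvc.services-4883.svc.cluster.local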
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:06:39.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4883" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:22.400 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":223,"skipped":3845,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:06:39.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 16 22:06:39.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8140 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 16 22:06:42.716: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0316 22:06:42.653049 3315 log.go:172] (0xc0003d9550) (0xc0006b9ae0) Create stream\nI0316 22:06:42.653104 3315 log.go:172] (0xc0003d9550) (0xc0006b9ae0) Stream added, broadcasting: 1\nI0316 22:06:42.655581 3315 log.go:172] (0xc0003d9550) Reply frame received for 1\nI0316 22:06:42.655637 3315 log.go:172] (0xc0003d9550) (0xc0006b9b80) Create stream\nI0316 22:06:42.655655 3315 log.go:172] (0xc0003d9550) (0xc0006b9b80) Stream added, broadcasting: 3\nI0316 22:06:42.656483 3315 log.go:172] (0xc0003d9550) Reply frame received for 3\nI0316 22:06:42.656535 3315 log.go:172] (0xc0003d9550) (0xc00069a000) Create stream\nI0316 22:06:42.656556 3315 log.go:172] (0xc0003d9550) (0xc00069a000) Stream added, broadcasting: 5\nI0316 22:06:42.657594 3315 log.go:172] (0xc0003d9550) Reply frame received for 5\nI0316 22:06:42.657632 3315 log.go:172] (0xc0003d9550) (0xc0006ba000) Create stream\nI0316 22:06:42.657645 3315 log.go:172] (0xc0003d9550) (0xc0006ba000) Stream added, broadcasting: 7\nI0316 22:06:42.658362 3315 log.go:172] (0xc0003d9550) Reply frame received for 7\nI0316 22:06:42.658586 3315 log.go:172] (0xc0006b9b80) (3) Writing data frame\nI0316 22:06:42.658773 3315 log.go:172] (0xc0006b9b80) (3) Writing data frame\nI0316 22:06:42.659471 3315 log.go:172] (0xc0003d9550) Data frame received for 5\nI0316 22:06:42.659488 3315 log.go:172] (0xc00069a000) (5) Data frame handling\nI0316 22:06:42.659501 3315 log.go:172] (0xc00069a000) (5) Data frame sent\nI0316 22:06:42.660070 3315 log.go:172] (0xc0003d9550) Data frame received for 5\nI0316 22:06:42.660088 3315 log.go:172] (0xc00069a000) (5) Data frame handling\nI0316 22:06:42.660104 3315 log.go:172] (0xc00069a000) (5) Data frame sent\nI0316 22:06:42.695420 3315 log.go:172] (0xc0003d9550) Data frame received for 7\nI0316 22:06:42.695461 3315 log.go:172] (0xc0006ba000) (7) Data frame handling\nI0316 22:06:42.695671 3315 log.go:172] (0xc0003d9550) Data frame received for 5\nI0316 22:06:42.695693 3315 log.go:172] (0xc00069a000) (5) Data frame handling\nI0316 22:06:42.695750 3315 log.go:172] (0xc0003d9550) Data frame received for 1\nI0316 22:06:42.695764 3315 log.go:172] (0xc0006b9ae0) (1) Data frame handling\nI0316 22:06:42.695775 3315 log.go:172] (0xc0006b9ae0) (1) Data frame sent\nI0316 22:06:42.695879 3315 log.go:172] (0xc0003d9550) (0xc0006b9ae0) Stream removed, broadcasting: 1\nI0316 22:06:42.695981 3315 log.go:172] (0xc0003d9550) (0xc0006b9b80) Stream removed, broadcasting: 3\nI0316 22:06:42.696052 3315 log.go:172] (0xc0003d9550) Go away received\nI0316 22:06:42.696265 3315 log.go:172] (0xc0003d9550) (0xc0006b9ae0) Stream removed, broadcasting: 1\nI0316 22:06:42.696290 3315 log.go:172] (0xc0003d9550) (0xc0006b9b80) Stream removed, broadcasting: 3\nI0316 22:06:42.696302 3315 log.go:172] (0xc0003d9550) (0xc00069a000) Stream removed, broadcasting: 5\nI0316 22:06:42.696314 3315 log.go:172] (0xc0003d9550) (0xc0006ba000) Stream removed, broadcasting: 7\n" Mar 16 22:06:42.716: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:06:44.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8140" for this suite. 
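Note: the stderr above warns that --generator=job/v1 is deprecated. The same attach-then-clean-up flow works without the generator flag; on current kubectl this creates a bare pod rather than a Job (the image and command string are the ones from the log):

kubectl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
  --rm=true --restart=Never --attach=true --stdin -- sh -c 'cat && echo stdin closed'
# to get an actual Job instead: kubectl create job e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29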
• [SLOW TEST:5.090 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1944 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":224,"skipped":3849,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:06:44.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 22:06:45.546: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 22:06:47.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993205, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993205, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993205, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993205, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 22:06:50.579: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:06:50.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1239" for this suite. 
STEP: Destroying namespace "webhook-1239-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.102 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":225,"skipped":3850,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:06:50.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6694 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 16 22:06:50.936: INFO: Found 0 stateful pods, waiting for 3 Mar 16 22:07:00.941: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 22:07:00.941: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 22:07:00.941: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 16 22:07:00.969: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 16 22:07:11.044: INFO: Updating stateful set ss2 Mar 16 22:07:11.067: INFO: Waiting for Pod statefulset-6694/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 16 22:07:21.201: INFO: Found 2 stateful pods, waiting for 3 Mar 16 22:07:31.205: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 22:07:31.205: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 22:07:31.205: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 16 22:07:31.227: INFO: Updating 
stateful set ss2 Mar 16 22:07:31.285: INFO: Waiting for Pod statefulset-6694/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 16 22:07:41.308: INFO: Updating stateful set ss2 Mar 16 22:07:41.336: INFO: Waiting for StatefulSet statefulset-6694/ss2 to complete update Mar 16 22:07:41.336: INFO: Waiting for Pod statefulset-6694/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 16 22:07:51.360: INFO: Deleting all statefulset in ns statefulset-6694 Mar 16 22:07:51.363: INFO: Scaling statefulset ss2 to 0 Mar 16 22:08:01.393: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 22:08:01.396: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:08:01.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6694" for this suite. • [SLOW TEST:70.585 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":226,"skipped":3854,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:08:01.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:08:01.493: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 16 22:08:06.506: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 16 22:08:06.507: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 16 22:08:06.529: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7140 /apis/apps/v1/namespaces/deployment-7140/deployments/test-cleanup-deployment 1343cafb-71e0-4541-af04-b415acad5e94 334388 1 2020-03-16 22:08:06 +0000 UTC 
map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00477ae08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 16 22:08:06.559: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-7140 /apis/apps/v1/namespaces/deployment-7140/replicasets/test-cleanup-deployment-55ffc6b7b6 2aa3dd5a-0d8b-45df-8b52-f0821839bb97 334390 1 2020-03-16 22:08:06 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 1343cafb-71e0-4541-af04-b415acad5e94 0xc004830487 0xc004830488}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048304f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 22:08:06.559: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 16 22:08:06.560: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7140 /apis/apps/v1/namespaces/deployment-7140/replicasets/test-cleanup-controller 2be43922-5516-4997-80e9-86240a1eadeb 334389 1 2020-03-16 22:08:01 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 
1343cafb-71e0-4541-af04-b415acad5e94 0xc0048303b7 0xc0048303b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004830418 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 16 22:08:06.597: INFO: Pod "test-cleanup-controller-vjvms" is available: &Pod{ObjectMeta:{test-cleanup-controller-vjvms test-cleanup-controller- deployment-7140 /api/v1/namespaces/deployment-7140/pods/test-cleanup-controller-vjvms 60a08f02-83d5-4ae2-a94b-b198ed76cd51 334375 0 2020-03-16 22:08:01 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 2be43922-5516-4997-80e9-86240a1eadeb 0xc00477b267 0xc00477b268}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-664bj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-664bj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-664bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*3
00,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 22:08:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 22:08:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 22:08:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 22:08:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.164,StartTime:2020-03-16 22:08:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 22:08:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e15441bd26a30a75cf6dd6b32de83006bd339d53b48e85d9c91c9d55b6c7c4ac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 22:08:06.597: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-fkrd5" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-fkrd5 test-cleanup-deployment-55ffc6b7b6- deployment-7140 /api/v1/namespaces/deployment-7140/pods/test-cleanup-deployment-55ffc6b7b6-fkrd5 f3cf26c5-5788-417f-aaf2-e0d9d7f4879f 334394 0 2020-03-16 22:08:06 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 2aa3dd5a-0d8b-45df-8b52-f0821839bb97 0xc00477b3e7 0xc00477b3e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-664bj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-664bj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-664bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 22:08:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:08:06.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7140" for this suite. 
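The RevisionHistoryLimit:*0 visible in the Deployment dump above is what drives the cleanup under test: with a history limit of zero, superseded ReplicaSets are garbage-collected as soon as they are scaled down. A minimal sketch of such a Deployment (field values mirror the dump; the manifest layout itself is ours):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0        # keep no old ReplicaSets around
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8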
• [SLOW TEST:5.231 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":227,"skipped":3894,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:08:06.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-1b96bfb3-8ce6-4e98-984d-4cdc2da3d11a STEP: Creating a pod to test consume configMaps Mar 16 22:08:06.758: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7d7c90d4-3e09-40a2-ae43-2a9bd8e75be2" in namespace "projected-5339" to be "success or failure" Mar 16 22:08:06.770: INFO: Pod "pod-projected-configmaps-7d7c90d4-3e09-40a2-ae43-2a9bd8e75be2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.381854ms Mar 16 22:08:08.774: INFO: Pod "pod-projected-configmaps-7d7c90d4-3e09-40a2-ae43-2a9bd8e75be2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016682116s Mar 16 22:08:10.779: INFO: Pod "pod-projected-configmaps-7d7c90d4-3e09-40a2-ae43-2a9bd8e75be2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020910146s STEP: Saw pod success Mar 16 22:08:10.779: INFO: Pod "pod-projected-configmaps-7d7c90d4-3e09-40a2-ae43-2a9bd8e75be2" satisfied condition "success or failure" Mar 16 22:08:10.782: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-7d7c90d4-3e09-40a2-ae43-2a9bd8e75be2 container projected-configmap-volume-test: STEP: delete the pod Mar 16 22:08:10.881: INFO: Waiting for pod pod-projected-configmaps-7d7c90d4-3e09-40a2-ae43-2a9bd8e75be2 to disappear Mar 16 22:08:10.908: INFO: Pod pod-projected-configmaps-7d7c90d4-3e09-40a2-ae43-2a9bd8e75be2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:08:10.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5339" for this suite. 
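For reference, a projected configMap volume sets file permissions through defaultMode on the projection. A minimal sketch — the mode value, image, and command below are assumptions for illustration; the log does not print the exact mode the test used:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox       # assumed image
    command: ["sh", "-c", "ls -ln /etc/projected"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400                    # octal; applied to every projected file
      sources:
      - configMap:
          name: projected-configmap-test-volume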
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3896,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:08:10.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 22:08:11.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58ff60d0-896f-4f4c-b891-bfe9f068b783" in namespace "projected-9497" to be "success or failure" Mar 16 22:08:11.022: INFO: Pod "downwardapi-volume-58ff60d0-896f-4f4c-b891-bfe9f068b783": Phase="Pending", Reason="", readiness=false. Elapsed: 4.908188ms Mar 16 22:08:13.026: INFO: Pod "downwardapi-volume-58ff60d0-896f-4f4c-b891-bfe9f068b783": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009043761s Mar 16 22:08:15.035: INFO: Pod "downwardapi-volume-58ff60d0-896f-4f4c-b891-bfe9f068b783": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017936922s STEP: Saw pod success Mar 16 22:08:15.035: INFO: Pod "downwardapi-volume-58ff60d0-896f-4f4c-b891-bfe9f068b783" satisfied condition "success or failure" Mar 16 22:08:15.037: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-58ff60d0-896f-4f4c-b891-bfe9f068b783 container client-container: STEP: delete the pod Mar 16 22:08:15.069: INFO: Waiting for pod downwardapi-volume-58ff60d0-896f-4f4c-b891-bfe9f068b783 to disappear Mar 16 22:08:15.074: INFO: Pod downwardapi-volume-58ff60d0-896f-4f4c-b891-bfe9f068b783 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:08:15.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9497" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3909,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:08:15.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:08:15.169: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 16 22:08:15.189: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:15.203: INFO: Number of nodes with available pods: 0 Mar 16 22:08:15.203: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:08:16.207: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:16.210: INFO: Number of nodes with available pods: 0 Mar 16 22:08:16.210: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:08:17.209: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:17.213: INFO: Number of nodes with available pods: 0 Mar 16 22:08:17.213: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:08:18.208: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:18.211: INFO: Number of nodes with available pods: 0 Mar 16 22:08:18.211: INFO: Node jerma-worker is running more than one daemon pod Mar 16 22:08:19.209: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:19.215: INFO: Number of nodes with available pods: 2 Mar 16 22:08:19.215: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 16 22:08:19.238: INFO: Wrong image for pod: daemon-set-c7hzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:19.238: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 16 22:08:19.294: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:20.304: INFO: Wrong image for pod: daemon-set-c7hzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:20.304: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:20.308: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:21.298: INFO: Wrong image for pod: daemon-set-c7hzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:21.298: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:21.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:22.299: INFO: Wrong image for pod: daemon-set-c7hzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:22.299: INFO: Pod daemon-set-c7hzh is not available Mar 16 22:08:22.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:22.303: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:23.299: INFO: Wrong image for pod: daemon-set-c7hzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:23.299: INFO: Pod daemon-set-c7hzh is not available Mar 16 22:08:23.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:23.304: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:24.299: INFO: Wrong image for pod: daemon-set-c7hzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:24.299: INFO: Pod daemon-set-c7hzh is not available Mar 16 22:08:24.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:24.303: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:25.299: INFO: Wrong image for pod: daemon-set-c7hzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:25.299: INFO: Pod daemon-set-c7hzh is not available Mar 16 22:08:25.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 16 22:08:25.303: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:26.299: INFO: Wrong image for pod: daemon-set-c7hzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:26.299: INFO: Pod daemon-set-c7hzh is not available Mar 16 22:08:26.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:26.303: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:27.302: INFO: Wrong image for pod: daemon-set-c7hzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:27.302: INFO: Pod daemon-set-c7hzh is not available Mar 16 22:08:27.302: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:27.306: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:28.299: INFO: Wrong image for pod: daemon-set-c7hzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:28.299: INFO: Pod daemon-set-c7hzh is not available Mar 16 22:08:28.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:28.303: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:29.305: INFO: Pod daemon-set-gpqhv is not available Mar 16 22:08:29.305: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:29.308: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:30.298: INFO: Pod daemon-set-gpqhv is not available Mar 16 22:08:30.298: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:30.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:31.309: INFO: Pod daemon-set-gpqhv is not available Mar 16 22:08:31.309: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:31.316: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:32.339: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 16 22:08:32.343: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:33.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:33.303: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:34.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:34.299: INFO: Pod daemon-set-h8dzl is not available Mar 16 22:08:34.303: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:35.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:35.299: INFO: Pod daemon-set-h8dzl is not available Mar 16 22:08:35.303: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:36.303: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:36.303: INFO: Pod daemon-set-h8dzl is not available Mar 16 22:08:36.307: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:37.304: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:37.304: INFO: Pod daemon-set-h8dzl is not available Mar 16 22:08:37.308: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:38.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:38.299: INFO: Pod daemon-set-h8dzl is not available Mar 16 22:08:38.304: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:39.299: INFO: Wrong image for pod: daemon-set-h8dzl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 22:08:39.299: INFO: Pod daemon-set-h8dzl is not available Mar 16 22:08:39.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:40.299: INFO: Pod daemon-set-h7df8 is not available Mar 16 22:08:40.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 16 22:08:40.306: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:40.308: INFO: Number of nodes with available pods: 1 Mar 16 22:08:40.308: INFO: Node jerma-worker2 is running more than one daemon pod Mar 16 22:08:41.374: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:41.378: INFO: Number of nodes with available pods: 1 Mar 16 22:08:41.378: INFO: Node jerma-worker2 is running more than one daemon pod Mar 16 22:08:42.313: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:42.317: INFO: Number of nodes with available pods: 1 Mar 16 22:08:42.317: INFO: Node jerma-worker2 is running more than one daemon pod Mar 16 22:08:43.313: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 22:08:43.317: INFO: Number of nodes with available pods: 2 Mar 16 22:08:43.317: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2169, will wait for the garbage collector to delete the pods Mar 16 22:08:43.391: INFO: Deleting DaemonSet.extensions daemon-set took: 5.961432ms Mar 16 22:08:43.791: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.263918ms Mar 16 22:08:49.595: INFO: Number of nodes with available pods: 0 Mar 16 22:08:49.595: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 22:08:49.598: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2169/daemonsets","resourceVersion":"334737"},"items":null} Mar 16 22:08:49.601: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2169/pods","resourceVersion":"334737"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:08:49.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2169" for this suite. 
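The update exercised above is the DaemonSet RollingUpdate strategy: changing the pod template image makes the controller kill and replace the daemon pod on each node in turn, which is why the log alternates between "Wrong image for pod" and "Pod ... is not available" until all nodes converge. A sketch of the two pieces involved — the label key and container name are our assumptions, the images come from the log:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1                    # replace one node's pod at a time
  selector:
    matchLabels:
      app: daemon-set                      # assumed label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                          # assumed container name
        image: docker.io/library/httpd:2.4.38-alpine

$ kubectl -n daemonsets-2169 set image daemonset/daemon-set \
    app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8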
• [SLOW TEST:34.539 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":230,"skipped":3912,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:08:49.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 16 22:08:49.700: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix685559107/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:08:49.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8135" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":231,"skipped":3922,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:08:49.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 22:08:49.905: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e3aeda9-e386-4f4c-a01a-d71811adc721" in namespace "projected-6765" to be "success or failure" Mar 16 22:08:49.908: INFO: Pod "downwardapi-volume-0e3aeda9-e386-4f4c-a01a-d71811adc721": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.183792ms Mar 16 22:08:51.921: INFO: Pod "downwardapi-volume-0e3aeda9-e386-4f4c-a01a-d71811adc721": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016727423s Mar 16 22:08:53.925: INFO: Pod "downwardapi-volume-0e3aeda9-e386-4f4c-a01a-d71811adc721": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020584926s STEP: Saw pod success Mar 16 22:08:53.925: INFO: Pod "downwardapi-volume-0e3aeda9-e386-4f4c-a01a-d71811adc721" satisfied condition "success or failure" Mar 16 22:08:53.928: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0e3aeda9-e386-4f4c-a01a-d71811adc721 container client-container: STEP: delete the pod Mar 16 22:08:53.946: INFO: Waiting for pod downwardapi-volume-0e3aeda9-e386-4f4c-a01a-d71811adc721 to disappear Mar 16 22:08:53.986: INFO: Pod downwardapi-volume-0e3aeda9-e386-4f4c-a01a-d71811adc721 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:08:53.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6765" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3935,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:08:53.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d066ba84-46e6-4204-a1ce-fed37bca344b STEP: Creating a pod to test consume secrets Mar 16 22:08:54.055: INFO: Waiting up to 5m0s for pod "pod-secrets-a66c097d-7ba4-4ade-b5ac-21b5475bd983" in namespace "secrets-8840" to be "success or failure" Mar 16 22:08:54.065: INFO: Pod "pod-secrets-a66c097d-7ba4-4ade-b5ac-21b5475bd983": Phase="Pending", Reason="", readiness=false. Elapsed: 10.280915ms Mar 16 22:08:56.069: INFO: Pod "pod-secrets-a66c097d-7ba4-4ade-b5ac-21b5475bd983": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014092549s Mar 16 22:08:58.073: INFO: Pod "pod-secrets-a66c097d-7ba4-4ade-b5ac-21b5475bd983": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017704559s STEP: Saw pod success Mar 16 22:08:58.073: INFO: Pod "pod-secrets-a66c097d-7ba4-4ade-b5ac-21b5475bd983" satisfied condition "success or failure" Mar 16 22:08:58.076: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a66c097d-7ba4-4ade-b5ac-21b5475bd983 container secret-volume-test: STEP: delete the pod Mar 16 22:08:58.124: INFO: Waiting for pod pod-secrets-a66c097d-7ba4-4ade-b5ac-21b5475bd983 to disappear Mar 16 22:08:58.137: INFO: Pod pod-secrets-a66c097d-7ba4-4ade-b5ac-21b5475bd983 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:08:58.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8840" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:08:58.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-881b2981-b05c-4514-a711-e2922a11c92c STEP: Creating configMap with name cm-test-opt-upd-f386a405-e75a-40ba-9578-02d9f28daf4a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-881b2981-b05c-4514-a711-e2922a11c92c STEP: Updating configmap cm-test-opt-upd-f386a405-e75a-40ba-9578-02d9f28daf4a STEP: Creating configMap with name cm-test-opt-create-a89f6a41-ee1f-4d25-b6be-969a2f8092fc STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:10:30.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-967" for this suite. 
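The "optional" behaviour verified above comes from the optional flag on the configMap volume source: a missing optional configMap yields an empty volume instead of a pod that cannot start, and creating, updating, or deleting the configMap afterwards is reflected in the mounted files on the kubelet's next volume sync. A sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-example                # illustrative
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox       # assumed image
    command: ["sh", "-c", "while true; do ls /etc/cm; sleep 5; done"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-test-opt-create             # may not exist yet
      optional: true                       # empty volume instead of a stuck pod

$ kubectl -n configmap-967 create configmap cm-test-opt-create --from-literal=data-1=value-1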
• [SLOW TEST:92.847 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":4015,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:10:30.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:10:37.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8261" for this suite. STEP: Destroying namespace "nsdeletetest-8608" for this suite. Mar 16 22:10:37.398: INFO: Namespace nsdeletetest-8608 was already deleted STEP: Destroying namespace "nsdeletetest-8083" for this suite. 
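Namespace deletion cascades to every namespaced object, including Services, which is what the sequence above verifies. The same check can be reproduced by hand (namespace and service names below are ours):

$ kubectl create namespace nsdeletetest
$ kubectl -n nsdeletetest create service clusterip test-service --tcp=80:80
$ kubectl delete namespace nsdeletetest
$ kubectl create namespace nsdeletetest
$ kubectl -n nsdeletetest get services
No resources found in nsdeletetest namespace.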
• [SLOW TEST:6.409 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":235,"skipped":4026,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:10:37.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 22:10:37.458: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9003433-d04d-4c2d-8306-725b9e28da4e" in namespace "downward-api-4252" to be "success or failure" Mar 16 22:10:37.491: INFO: Pod "downwardapi-volume-b9003433-d04d-4c2d-8306-725b9e28da4e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.482774ms Mar 16 22:10:39.495: INFO: Pod "downwardapi-volume-b9003433-d04d-4c2d-8306-725b9e28da4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036792275s Mar 16 22:10:41.499: INFO: Pod "downwardapi-volume-b9003433-d04d-4c2d-8306-725b9e28da4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040993629s STEP: Saw pod success Mar 16 22:10:41.499: INFO: Pod "downwardapi-volume-b9003433-d04d-4c2d-8306-725b9e28da4e" satisfied condition "success or failure" Mar 16 22:10:41.503: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b9003433-d04d-4c2d-8306-725b9e28da4e container client-container: STEP: delete the pod Mar 16 22:10:41.565: INFO: Waiting for pod downwardapi-volume-b9003433-d04d-4c2d-8306-725b9e28da4e to disappear Mar 16 22:10:41.576: INFO: Pod downwardapi-volume-b9003433-d04d-4c2d-8306-725b9e28da4e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:10:41.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4252" for this suite. 
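The volume plugin under test exposes a container's resource requests through a downwardAPI item with a resourceFieldRef. A minimal sketch — request size, image, and command are assumptions; with the default divisor of "1", the file contains the request in bytes:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-example         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox       # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory        # prints 33554432 for 32Mi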
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":4029,"failed":0} SSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:10:41.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9168, will wait for the garbage collector to delete the pods Mar 16 22:10:47.701: INFO: Deleting Job.batch foo took: 6.34527ms Mar 16 22:10:48.101: INFO: Terminating Job.batch foo pods took: 400.263617ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:11:29.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9168" for this suite. • [SLOW TEST:47.929 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":237,"skipped":4034,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:11:29.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 16 22:11:29.584: INFO: Waiting up to 5m0s for pod "pod-06cb81e5-3ab1-4310-953b-298249debe2f" in namespace "emptydir-2034" to be "success or failure" Mar 16 22:11:29.606: INFO: Pod "pod-06cb81e5-3ab1-4310-953b-298249debe2f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.399763ms Mar 16 22:11:31.610: INFO: Pod "pod-06cb81e5-3ab1-4310-953b-298249debe2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026012677s Mar 16 22:11:33.614: INFO: Pod "pod-06cb81e5-3ab1-4310-953b-298249debe2f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030007281s STEP: Saw pod success Mar 16 22:11:33.614: INFO: Pod "pod-06cb81e5-3ab1-4310-953b-298249debe2f" satisfied condition "success or failure" Mar 16 22:11:33.617: INFO: Trying to get logs from node jerma-worker2 pod pod-06cb81e5-3ab1-4310-953b-298249debe2f container test-container: STEP: delete the pod Mar 16 22:11:33.653: INFO: Waiting for pod pod-06cb81e5-3ab1-4310-953b-298249debe2f to disappear Mar 16 22:11:33.679: INFO: Pod pod-06cb81e5-3ab1-4310-953b-298249debe2f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:11:33.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2034" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":4037,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:11:33.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0316 22:11:45.352922 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 22:11:45.352: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:11:45.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5466" for this suite. 
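What protects the surviving dependents above is a second ownerReference: the garbage collector deletes an object only once all of its owners are gone, even when one owner is removed with foreground cascading deletion. The relevant metadata fragment on one of the shared pods looks roughly like this (UIDs are placeholders, not values from the run):

metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-1111-1111-1111-111111111111   # placeholder
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 22222222-2222-2222-2222-222222222222   # placeholder

Deleting simpletest-rc-to-be-deleted with propagationPolicy=Foreground removes that ownerReference from the pod but leaves the pod in place, because simpletest-rc-to-stay still owns it.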
• [SLOW TEST:11.671 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":239,"skipped":4052,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:11:45.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 16 22:11:45.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4192' Mar 16 22:11:45.674: INFO: stderr: "" Mar 16 22:11:45.674: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 22:11:45.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4192' Mar 16 22:11:45.792: INFO: stderr: "" Mar 16 22:11:45.792: INFO: stdout: "update-demo-nautilus-swxq5 update-demo-nautilus-w6srt " Mar 16 22:11:45.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-swxq5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4192' Mar 16 22:11:45.887: INFO: stderr: "" Mar 16 22:11:45.887: INFO: stdout: "" Mar 16 22:11:45.887: INFO: update-demo-nautilus-swxq5 is created but not running Mar 16 22:11:50.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4192' Mar 16 22:11:51.018: INFO: stderr: "" Mar 16 22:11:51.018: INFO: stdout: "update-demo-nautilus-swxq5 update-demo-nautilus-w6srt " Mar 16 22:11:51.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-swxq5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4192' Mar 16 22:11:51.240: INFO: stderr: "" Mar 16 22:11:51.240: INFO: stdout: "true" Mar 16 22:11:51.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-swxq5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4192' Mar 16 22:11:51.865: INFO: stderr: "" Mar 16 22:11:51.865: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 22:11:51.866: INFO: validating pod update-demo-nautilus-swxq5 Mar 16 22:11:52.078: INFO: got data: { "image": "nautilus.jpg" } Mar 16 22:11:52.079: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 22:11:52.079: INFO: update-demo-nautilus-swxq5 is verified up and running Mar 16 22:11:52.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w6srt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4192' Mar 16 22:11:52.257: INFO: stderr: "" Mar 16 22:11:52.257: INFO: stdout: "true" Mar 16 22:11:52.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w6srt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4192' Mar 16 22:11:52.439: INFO: stderr: "" Mar 16 22:11:52.439: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 22:11:52.439: INFO: validating pod update-demo-nautilus-w6srt Mar 16 22:11:52.515: INFO: got data: { "image": "nautilus.jpg" } Mar 16 22:11:52.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 22:11:52.515: INFO: update-demo-nautilus-w6srt is verified up and running STEP: using delete to clean up resources Mar 16 22:11:52.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4192' Mar 16 22:11:52.659: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 22:11:52.659: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 16 22:11:52.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4192' Mar 16 22:11:52.780: INFO: stderr: "No resources found in kubectl-4192 namespace.\n" Mar 16 22:11:52.780: INFO: stdout: "" Mar 16 22:11:52.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4192 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 22:11:52.990: INFO: stderr: "" Mar 16 22:11:52.990: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:11:52.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4192" for this suite. 
• [SLOW TEST:7.826 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":240,"skipped":4060,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:11:53.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-a4313082-011a-4362-a1cc-f8ca386e6cbd [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:11:53.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-942" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":241,"skipped":4101,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:11:53.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-26691a4c-861c-418f-98e6-51ea815b408f STEP: Creating a pod to test consume configMaps Mar 16 22:11:53.759: INFO: Waiting up to 5m0s for pod "pod-configmaps-00095d6d-80dc-423d-925b-c701c162baef" in namespace "configmap-8595" to be "success or failure" Mar 16 22:11:53.770: INFO: Pod "pod-configmaps-00095d6d-80dc-423d-925b-c701c162baef": Phase="Pending", Reason="", readiness=false. Elapsed: 11.371392ms Mar 16 22:11:55.774: INFO: Pod "pod-configmaps-00095d6d-80dc-423d-925b-c701c162baef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015197418s Mar 16 22:11:57.778: INFO: Pod "pod-configmaps-00095d6d-80dc-423d-925b-c701c162baef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019295476s STEP: Saw pod success Mar 16 22:11:57.778: INFO: Pod "pod-configmaps-00095d6d-80dc-423d-925b-c701c162baef" satisfied condition "success or failure" Mar 16 22:11:57.782: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-00095d6d-80dc-423d-925b-c701c162baef container configmap-volume-test: STEP: delete the pod Mar 16 22:11:57.802: INFO: Waiting for pod pod-configmaps-00095d6d-80dc-423d-925b-c701c162baef to disappear Mar 16 22:11:57.806: INFO: Pod pod-configmaps-00095d6d-80dc-423d-925b-c701c162baef no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:11:57.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8595" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4103,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:11:57.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 16 22:11:57.854: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Mar 16 22:11:58.669: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 16 22:12:00.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993518, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993518, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993518, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993518, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 22:12:03.540: INFO: Waited 628.16475ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:12:03.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4715" for this suite. • [SLOW TEST:6.261 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":243,"skipped":4109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:12:04.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 16 22:12:04.375: INFO: Waiting up to 5m0s for pod "pod-f1387067-e514-4234-a276-d23768792c01" in namespace "emptydir-2701" to be "success or failure" Mar 16 22:12:04.400: INFO: Pod "pod-f1387067-e514-4234-a276-d23768792c01": Phase="Pending", Reason="", readiness=false. Elapsed: 24.536352ms Mar 16 22:12:06.404: INFO: Pod "pod-f1387067-e514-4234-a276-d23768792c01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029029663s Mar 16 22:12:08.408: INFO: Pod "pod-f1387067-e514-4234-a276-d23768792c01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032620871s STEP: Saw pod success Mar 16 22:12:08.408: INFO: Pod "pod-f1387067-e514-4234-a276-d23768792c01" satisfied condition "success or failure" Mar 16 22:12:08.411: INFO: Trying to get logs from node jerma-worker2 pod pod-f1387067-e514-4234-a276-d23768792c01 container test-container: STEP: delete the pod Mar 16 22:12:08.431: INFO: Waiting for pod pod-f1387067-e514-4234-a276-d23768792c01 to disappear Mar 16 22:12:08.435: INFO: Pod pod-f1387067-e514-4234-a276-d23768792c01 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:12:08.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2701" for this suite. 
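As a sketch of what "(non-root,0777,default)" denotes here: a non-root security context, an emptyDir on the default (disk-backed) medium, and a container that inspects the mount's 0777 permissions. Image, command, name, and user ID below are illustrative, not the framework's generated pod:

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo            # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001              # the "non-root" part of the test name
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
      volumeMounts:
      - name: scratch
        mountPath: /test-volume
  volumes:
    - name: scratch
      emptyDir: {}                 # "default" medium: node disk, not tmpfs ("Memory")
  EOF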
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4167,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:12:08.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:12:24.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1991" for this suite. • [SLOW TEST:16.279 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":245,"skipped":4175,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:12:24.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-417dfbb5-6d36-41a0-9a29-366ae9b458b1 STEP: Creating a pod to test consume configMaps Mar 16 22:12:24.803: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c03ae61d-a290-4566-a505-c13e8e6cacb6" in namespace "projected-6811" to be "success or failure" Mar 16 22:12:24.846: INFO: Pod "pod-projected-configmaps-c03ae61d-a290-4566-a505-c13e8e6cacb6": Phase="Pending", Reason="", readiness=false. Elapsed: 43.54038ms Mar 16 22:12:26.849: INFO: Pod "pod-projected-configmaps-c03ae61d-a290-4566-a505-c13e8e6cacb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046265887s Mar 16 22:12:28.864: INFO: Pod "pod-projected-configmaps-c03ae61d-a290-4566-a505-c13e8e6cacb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061170911s STEP: Saw pod success Mar 16 22:12:28.864: INFO: Pod "pod-projected-configmaps-c03ae61d-a290-4566-a505-c13e8e6cacb6" satisfied condition "success or failure" Mar 16 22:12:28.867: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-c03ae61d-a290-4566-a505-c13e8e6cacb6 container projected-configmap-volume-test: STEP: delete the pod Mar 16 22:12:28.916: INFO: Waiting for pod pod-projected-configmaps-c03ae61d-a290-4566-a505-c13e8e6cacb6 to disappear Mar 16 22:12:28.921: INFO: Pod pod-projected-configmaps-c03ae61d-a290-4566-a505-c13e8e6cacb6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:12:28.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6811" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4177,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:12:28.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 22:12:29.550: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 22:12:31.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993549, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993549, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993549, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993549, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 22:12:34.589: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 16 22:12:38.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-427 to-be-attached-pod -i -c=container1' Mar 16 22:12:38.846: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:12:38.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-427" for this suite. STEP: Destroying namespace "webhook-427-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.031 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":247,"skipped":4183,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:12:38.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 16 22:12:39.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-748' Mar 16 22:12:39.118: INFO: stderr: "" Mar 16 22:12:39.118: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 16 22:12:44.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-748 -o json' Mar 16 22:12:44.279: INFO: stderr: "" Mar 16 22:12:44.279: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-16T22:12:39Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-748\",\n \"resourceVersion\": \"336201\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-748/pods/e2e-test-httpd-pod\",\n \"uid\": \"994424f8-8dbe-4124-ba3d-f8129cba52b3\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-h665z\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": 
\"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-h665z\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-h665z\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T22:12:39Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T22:12:41Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T22:12:41Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T22:12:39Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://341a5028c4fa660807be5f5a3b9adf0da8a54952009709e557abf6680740c3e9\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-16T22:12:41Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.184\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.184\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-16T22:12:39Z\"\n }\n}\n" STEP: replace the image in the pod Mar 16 22:12:44.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-748' Mar 16 22:12:44.730: INFO: stderr: "" Mar 16 22:12:44.730: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902 Mar 16 22:12:44.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-748' Mar 16 22:12:59.522: INFO: stderr: "" Mar 16 22:12:59.523: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:12:59.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-748" for this suite. 
• [SLOW TEST:20.572 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":248,"skipped":4192,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:12:59.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 22:12:59.594: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be47f1c8-8a11-4d67-b1bd-163632a9c61a" in namespace "downward-api-38" to be "success or failure" Mar 16 22:12:59.598: INFO: Pod "downwardapi-volume-be47f1c8-8a11-4d67-b1bd-163632a9c61a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.549713ms Mar 16 22:13:01.603: INFO: Pod "downwardapi-volume-be47f1c8-8a11-4d67-b1bd-163632a9c61a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008249921s Mar 16 22:13:03.608: INFO: Pod "downwardapi-volume-be47f1c8-8a11-4d67-b1bd-163632a9c61a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013432268s STEP: Saw pod success Mar 16 22:13:03.608: INFO: Pod "downwardapi-volume-be47f1c8-8a11-4d67-b1bd-163632a9c61a" satisfied condition "success or failure" Mar 16 22:13:03.611: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-be47f1c8-8a11-4d67-b1bd-163632a9c61a container client-container: STEP: delete the pod Mar 16 22:13:03.644: INFO: Waiting for pod downwardapi-volume-be47f1c8-8a11-4d67-b1bd-163632a9c61a to disappear Mar 16 22:13:03.652: INFO: Pod downwardapi-volume-be47f1c8-8a11-4d67-b1bd-163632a9c61a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:13:03.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-38" for this suite. 
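The downward API volume in this test exposes the container's own CPU request as a file via resourceFieldRef. A minimal sketch; names, the request value, and the divisor are illustrative:

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-demo     # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m                # the value the projected file should reflect
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m            # so the file reads "250"
  EOF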
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4198,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:13:03.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-f815d3fb-72e1-47a1-832c-41e56dd9f5a7 STEP: Creating a pod to test consume configMaps Mar 16 22:13:03.769: INFO: Waiting up to 5m0s for pod "pod-configmaps-c545e415-5f4f-4e0f-ba35-fac4c8f6749d" in namespace "configmap-6917" to be "success or failure" Mar 16 22:13:03.772: INFO: Pod "pod-configmaps-c545e415-5f4f-4e0f-ba35-fac4c8f6749d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.928678ms Mar 16 22:13:05.775: INFO: Pod "pod-configmaps-c545e415-5f4f-4e0f-ba35-fac4c8f6749d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00604906s Mar 16 22:13:07.779: INFO: Pod "pod-configmaps-c545e415-5f4f-4e0f-ba35-fac4c8f6749d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010329166s STEP: Saw pod success Mar 16 22:13:07.779: INFO: Pod "pod-configmaps-c545e415-5f4f-4e0f-ba35-fac4c8f6749d" satisfied condition "success or failure" Mar 16 22:13:07.783: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c545e415-5f4f-4e0f-ba35-fac4c8f6749d container configmap-volume-test: STEP: delete the pod Mar 16 22:13:07.812: INFO: Waiting for pod pod-configmaps-c545e415-5f4f-4e0f-ba35-fac4c8f6749d to disappear Mar 16 22:13:07.826: INFO: Pod pod-configmaps-c545e415-5f4f-4e0f-ba35-fac4c8f6749d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:13:07.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6917" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4204,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:13:07.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 22:13:07.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b3b87b4-0498-4e41-a2e2-f5144d1077eb" in namespace "downward-api-3350" to be "success or failure" Mar 16 22:13:07.916: INFO: Pod "downwardapi-volume-1b3b87b4-0498-4e41-a2e2-f5144d1077eb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.243545ms Mar 16 22:13:09.920: INFO: Pod "downwardapi-volume-1b3b87b4-0498-4e41-a2e2-f5144d1077eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011596882s Mar 16 22:13:11.924: INFO: Pod "downwardapi-volume-1b3b87b4-0498-4e41-a2e2-f5144d1077eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015343567s STEP: Saw pod success Mar 16 22:13:11.924: INFO: Pod "downwardapi-volume-1b3b87b4-0498-4e41-a2e2-f5144d1077eb" satisfied condition "success or failure" Mar 16 22:13:11.927: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1b3b87b4-0498-4e41-a2e2-f5144d1077eb container client-container: STEP: delete the pod Mar 16 22:13:11.960: INFO: Waiting for pod downwardapi-volume-1b3b87b4-0498-4e41-a2e2-f5144d1077eb to disappear Mar 16 22:13:11.970: INFO: Pod downwardapi-volume-1b3b87b4-0498-4e41-a2e2-f5144d1077eb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:13:11.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3350" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4222,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:13:11.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:13:17.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7152" for this suite. • [SLOW TEST:5.106 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":252,"skipped":4227,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:13:17.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:13:33.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7694" for this suite. • [SLOW TEST:16.270 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":253,"skipped":4228,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:13:33.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 16 22:13:33.401: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 22:13:33.421: INFO: Waiting for terminating namespaces to be deleted... 
Mar 16 22:13:33.424: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 16 22:13:33.430: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 16 22:13:33.430: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 22:13:33.430: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 16 22:13:33.430: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 22:13:33.430: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 16 22:13:33.434: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 16 22:13:33.434: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 22:13:33.434: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 16 22:13:33.434: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fce881e20ebdac], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:13:34.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2622" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":254,"skipped":4241,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:13:34.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 16 22:13:34.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1744' Mar 16 22:13:34.640: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 16 22:13:34.640: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602 Mar 16 22:13:36.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1744' Mar 16 22:13:36.761: INFO: stderr: "" Mar 16 22:13:36.761: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:13:36.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1744" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":255,"skipped":4263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:13:36.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-d0a29b43-e10f-4701-91f9-ebce0521f3a7 STEP: Creating configMap with name cm-test-opt-upd-7596f16f-83fb-4006-9961-f576824d1387 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d0a29b43-e10f-4701-91f9-ebce0521f3a7 STEP: Updating configmap cm-test-opt-upd-7596f16f-83fb-4006-9961-f576824d1387 STEP: Creating configMap with name cm-test-opt-create-15a1acc2-cd6d-42eb-a351-712bc2c6aa12 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:14:47.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5935" for this suite. 
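The "optional updates" case above turns on optional: true, so a referenced-but-missing ConfigMap leaves the file absent instead of blocking the pod, and the kubelet later syncs creations, updates, and deletions of the ConfigMaps into the running volume, which is why the test spends most of its 70 seconds in "waiting to observe update in volume". A sketch of one such source (the pod name and command are illustrative; the ConfigMap name is one created during the run):

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-optional-demo  # hypothetical
  spec:
    containers:
    - name: watcher
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "while true; do cat /etc/cm/data-1 2>/dev/null; sleep 5; done"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      projected:
        sources:
        - configMap:
            name: cm-test-opt-create-15a1acc2-cd6d-42eb-a351-712bc2c6aa12
            optional: true         # pod starts even while this ConfigMap is absent
            items:
            - key: data-1
              path: data-1
  EOF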
• [SLOW TEST:70.716 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4295,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:14:47.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5407.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5407.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 22:14:53.602: INFO: DNS probes using dns-test-92caa323-1480-4796-a5c4-a0ce971ff135 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5407.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5407.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 22:14:59.848: INFO: File wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local from pod dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 22:14:59.851: INFO: File jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local from pod dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 22:14:59.851: INFO: Lookups using dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 failed for: [wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local] Mar 16 22:15:04.856: INFO: File wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local from pod dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 16 22:15:04.860: INFO: File jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local from pod dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 22:15:04.860: INFO: Lookups using dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 failed for: [wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local] Mar 16 22:15:09.856: INFO: File wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local from pod dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 22:15:09.859: INFO: File jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local from pod dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 22:15:09.859: INFO: Lookups using dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 failed for: [wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local] Mar 16 22:15:14.856: INFO: File wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local from pod dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 22:15:14.860: INFO: File jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local from pod dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 22:15:14.860: INFO: Lookups using dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 failed for: [wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local] Mar 16 22:15:19.856: INFO: File wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local from pod dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 22:15:19.859: INFO: File jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local from pod dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 16 22:15:19.859: INFO: Lookups using dns-5407/dns-test-825b0676-fb19-474f-86c3-f3715a77e522 failed for: [wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local] Mar 16 22:15:24.860: INFO: DNS probes using dns-test-825b0676-fb19-474f-86c3-f3715a77e522 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5407.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5407.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5407.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5407.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 22:15:33.306: INFO: DNS probes using dns-test-88ae07c0-025c-4644-8df0-0f59bb4ddfa3 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:15:33.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5407" for this suite. • [SLOW TEST:45.900 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":257,"skipped":4316,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:15:33.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 22:15:36.724: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:15:36.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4179" for this suite. 
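------------------------------
The termination-message spec above reduces to a small pod: a container running as a non-root UID writes "DONE" to a TerminationMessagePath other than the default /dev/termination-log, and on exit the kubelet copies that file into the container's terminated status, which is what the "Expected: &{DONE}" check reads back. A minimal sketch of such a pod, assuming client-go v0.17.x to match the v1.17 cluster under test (names, image, and path are illustrative, not the suite's exact values):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createTerminationMessagePod builds a pod whose only container runs as a
// non-root user and writes its termination message to a non-default path.
func createTerminationMessagePod(cs kubernetes.Interface, ns string) (*corev1.Pod, error) {
	uid := int64(1000) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// The kubelet copies this file into the container's
				// terminated-state message when the container exits.
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	// client-go v0.17.x signature: Create takes only the object, no context.
	return cs.CoreV1().Pods(ns).Create(pod)
}
------------------------------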
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4361,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:15:36.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:15:49.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3310" for this suite. • [SLOW TEST:13.202 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":259,"skipped":4380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:15:49.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2245 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 22:15:50.052: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 16 22:16:16.161: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.191:8080/dial?request=hostname&protocol=udp&host=10.244.1.164&port=8081&tries=1'] Namespace:pod-network-test-2245 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 22:16:16.161: INFO: >>> kubeConfig: /root/.kube/config I0316 22:16:16.189768 6 log.go:172] (0xc005256580) (0xc0012b2aa0) Create stream I0316 22:16:16.189803 6 log.go:172] (0xc005256580) (0xc0012b2aa0) Stream added, broadcasting: 1 I0316 22:16:16.191701 6 log.go:172] (0xc005256580) Reply frame received for 1 I0316 22:16:16.191742 6 log.go:172] (0xc005256580) (0xc001a200a0) Create stream I0316 22:16:16.191756 6 log.go:172] (0xc005256580) (0xc001a200a0) Stream added, broadcasting: 3 I0316 22:16:16.192824 6 log.go:172] (0xc005256580) Reply frame received for 3 I0316 22:16:16.192848 6 log.go:172] (0xc005256580) (0xc0012b2b40) Create stream I0316 22:16:16.192856 6 log.go:172] (0xc005256580) (0xc0012b2b40) Stream added, broadcasting: 5 I0316 22:16:16.194009 6 log.go:172] (0xc005256580) Reply frame received for 5 I0316 22:16:16.267087 6 log.go:172] (0xc005256580) Data frame received for 3 I0316 22:16:16.267112 6 log.go:172] (0xc001a200a0) (3) Data frame handling I0316 22:16:16.267128 6 log.go:172] (0xc001a200a0) (3) Data frame sent I0316 22:16:16.267954 6 log.go:172] (0xc005256580) Data frame received for 3 I0316 22:16:16.267984 6 log.go:172] (0xc001a200a0) (3) Data frame handling I0316 22:16:16.268188 6 log.go:172] (0xc005256580) Data frame received for 5 I0316 22:16:16.268208 6 log.go:172] (0xc0012b2b40) (5) Data frame handling I0316 22:16:16.270420 6 log.go:172] (0xc005256580) Data frame received for 1 I0316 22:16:16.270459 6 log.go:172] (0xc0012b2aa0) (1) Data frame handling I0316 22:16:16.270493 6 log.go:172] (0xc0012b2aa0) (1) Data frame sent I0316 22:16:16.270517 6 log.go:172] (0xc005256580) (0xc0012b2aa0) Stream removed, broadcasting: 1 I0316 22:16:16.270541 6 log.go:172] (0xc005256580) Go away received I0316 22:16:16.270672 6 log.go:172] (0xc005256580) (0xc0012b2aa0) Stream removed, broadcasting: 1 I0316 22:16:16.270699 6 log.go:172] (0xc005256580) (0xc001a200a0) Stream removed, broadcasting: 3 I0316 22:16:16.270718 6 log.go:172] (0xc005256580) (0xc0012b2b40) Stream removed, 
broadcasting: 5 Mar 16 22:16:16.270: INFO: Waiting for responses: map[] Mar 16 22:16:16.274: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.191:8080/dial?request=hostname&protocol=udp&host=10.244.2.190&port=8081&tries=1'] Namespace:pod-network-test-2245 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 22:16:16.274: INFO: >>> kubeConfig: /root/.kube/config I0316 22:16:16.310203 6 log.go:172] (0xc002ad0370) (0xc001a20820) Create stream I0316 22:16:16.310250 6 log.go:172] (0xc002ad0370) (0xc001a20820) Stream added, broadcasting: 1 I0316 22:16:16.317328 6 log.go:172] (0xc002ad0370) Reply frame received for 1 I0316 22:16:16.317370 6 log.go:172] (0xc002ad0370) (0xc001a208c0) Create stream I0316 22:16:16.317385 6 log.go:172] (0xc002ad0370) (0xc001a208c0) Stream added, broadcasting: 3 I0316 22:16:16.318569 6 log.go:172] (0xc002ad0370) Reply frame received for 3 I0316 22:16:16.318615 6 log.go:172] (0xc002ad0370) (0xc001a20b40) Create stream I0316 22:16:16.318637 6 log.go:172] (0xc002ad0370) (0xc001a20b40) Stream added, broadcasting: 5 I0316 22:16:16.319755 6 log.go:172] (0xc002ad0370) Reply frame received for 5 I0316 22:16:16.412742 6 log.go:172] (0xc002ad0370) Data frame received for 3 I0316 22:16:16.412785 6 log.go:172] (0xc001a208c0) (3) Data frame handling I0316 22:16:16.412811 6 log.go:172] (0xc001a208c0) (3) Data frame sent I0316 22:16:16.413752 6 log.go:172] (0xc002ad0370) Data frame received for 3 I0316 22:16:16.413776 6 log.go:172] (0xc001a208c0) (3) Data frame handling I0316 22:16:16.413800 6 log.go:172] (0xc002ad0370) Data frame received for 5 I0316 22:16:16.413814 6 log.go:172] (0xc001a20b40) (5) Data frame handling I0316 22:16:16.415854 6 log.go:172] (0xc002ad0370) Data frame received for 1 I0316 22:16:16.415888 6 log.go:172] (0xc001a20820) (1) Data frame handling I0316 22:16:16.415919 6 log.go:172] (0xc001a20820) (1) Data frame sent I0316 22:16:16.415948 6 log.go:172] (0xc002ad0370) (0xc001a20820) Stream removed, broadcasting: 1 I0316 22:16:16.415979 6 log.go:172] (0xc002ad0370) Go away received I0316 22:16:16.416053 6 log.go:172] (0xc002ad0370) (0xc001a20820) Stream removed, broadcasting: 1 I0316 22:16:16.416070 6 log.go:172] (0xc002ad0370) (0xc001a208c0) Stream removed, broadcasting: 3 I0316 22:16:16.416081 6 log.go:172] (0xc002ad0370) (0xc001a20b40) Stream removed, broadcasting: 5 Mar 16 22:16:16.416: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:16:16.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2245" for this suite. 
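------------------------------
The curl commands in the log above are the whole mechanism of this networking check: the framework execs into a host-network test-container pod and asks its agnhost server to dial another pod's IP over UDP, reporting which hostname answered. A sketch of the same probe issued directly, with the IPs as illustrative values taken from the log:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// dialViaAgnhost asks the agnhost test container listening on hostIP:8080 to
// send a UDP probe to targetIP:8081 and returns the raw reply body.
func dialViaAgnhost(hostIP, targetIP string) (string, error) {
	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostname&protocol=udp&host=%s&port=8081&tries=1",
		hostIP, targetIP)
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}

The "Waiting for responses: map[]" lines mean the framework's set of hostnames still awaited is empty, i.e. every expected pod has answered.
------------------------------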
• [SLOW TEST:26.426 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:16:16.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-f55b2d6e-c14f-4c97-a966-4478f6d1653d STEP: Creating a pod to test consume secrets Mar 16 22:16:16.498: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-68a67e2c-835f-4335-a095-33ec692e2f83" in namespace "projected-4196" to be "success or failure" Mar 16 22:16:16.521: INFO: Pod "pod-projected-secrets-68a67e2c-835f-4335-a095-33ec692e2f83": Phase="Pending", Reason="", readiness=false. Elapsed: 22.585191ms Mar 16 22:16:18.525: INFO: Pod "pod-projected-secrets-68a67e2c-835f-4335-a095-33ec692e2f83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02711943s Mar 16 22:16:20.530: INFO: Pod "pod-projected-secrets-68a67e2c-835f-4335-a095-33ec692e2f83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031223391s STEP: Saw pod success Mar 16 22:16:20.530: INFO: Pod "pod-projected-secrets-68a67e2c-835f-4335-a095-33ec692e2f83" satisfied condition "success or failure" Mar 16 22:16:20.532: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-68a67e2c-835f-4335-a095-33ec692e2f83 container projected-secret-volume-test: STEP: delete the pod Mar 16 22:16:20.561: INFO: Waiting for pod pod-projected-secrets-68a67e2c-835f-4335-a095-33ec692e2f83 to disappear Mar 16 22:16:20.566: INFO: Pod pod-projected-secrets-68a67e2c-835f-4335-a095-33ec692e2f83 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:16:20.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4196" for this suite. 
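------------------------------
The projected-secret spec above is checking file ownership and permissions, not just content: the secret is mounted through a projected volume with an explicit defaultMode while the pod sets a non-root runAsUser and an fsGroup that should own the projected files. A minimal sketch of that volume layout, assuming client-go v0.17.x (secret name, IDs, mode, and paths are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createProjectedSecretPod mounts an existing secret via a projected volume
// with an explicit file mode, readable by a non-root user through fsGroup.
func createProjectedSecretPod(cs kubernetes.Interface, ns, secretName string) (*corev1.Pod, error) {
	uid, gid := int64(1000), int64(1001)
	mode := int32(0440)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &gid, // group ownership applied to projected files
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode, // file mode for projected entries
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "ls -ln /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret", MountPath: "/etc/projected"}},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(pod)
}
------------------------------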
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4428,"failed":0} SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:16:20.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:16:26.794: INFO: Waiting up to 5m0s for pod "client-envvars-26ac1772-cab2-479a-a832-2797a7f3c529" in namespace "pods-5792" to be "success or failure" Mar 16 22:16:26.800: INFO: Pod "client-envvars-26ac1772-cab2-479a-a832-2797a7f3c529": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114349ms Mar 16 22:16:28.805: INFO: Pod "client-envvars-26ac1772-cab2-479a-a832-2797a7f3c529": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011051077s Mar 16 22:16:31.008: INFO: Pod "client-envvars-26ac1772-cab2-479a-a832-2797a7f3c529": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2134498s STEP: Saw pod success Mar 16 22:16:31.008: INFO: Pod "client-envvars-26ac1772-cab2-479a-a832-2797a7f3c529" satisfied condition "success or failure" Mar 16 22:16:31.057: INFO: Trying to get logs from node jerma-worker pod client-envvars-26ac1772-cab2-479a-a832-2797a7f3c529 container env3cont: STEP: delete the pod Mar 16 22:16:31.214: INFO: Waiting for pod client-envvars-26ac1772-cab2-479a-a832-2797a7f3c529 to disappear Mar 16 22:16:31.225: INFO: Pod client-envvars-26ac1772-cab2-479a-a832-2797a7f3c529 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:16:31.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5792" for this suite. 
• [SLOW TEST:10.659 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4431,"failed":0} [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:16:31.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:16:35.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-388" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":263,"skipped":4431,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:16:35.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-5zjj STEP: Creating a pod to test atomic-volume-subpath Mar 16 22:16:35.532: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5zjj" in namespace "subpath-7910" to be "success or failure" Mar 16 22:16:35.707: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Pending", Reason="", readiness=false. Elapsed: 174.925817ms Mar 16 22:16:37.711: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179509905s Mar 16 22:16:39.715: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Running", Reason="", readiness=true. Elapsed: 4.183044823s Mar 16 22:16:41.719: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Running", Reason="", readiness=true. Elapsed: 6.187404215s Mar 16 22:16:43.723: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.191328951s Mar 16 22:16:45.727: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Running", Reason="", readiness=true. Elapsed: 10.195717815s Mar 16 22:16:47.731: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Running", Reason="", readiness=true. Elapsed: 12.199471878s Mar 16 22:16:49.735: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Running", Reason="", readiness=true. Elapsed: 14.20343869s Mar 16 22:16:51.739: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Running", Reason="", readiness=true. Elapsed: 16.207275864s Mar 16 22:16:53.743: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Running", Reason="", readiness=true. Elapsed: 18.211139412s Mar 16 22:16:55.746: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Running", Reason="", readiness=true. Elapsed: 20.214752179s Mar 16 22:16:57.749: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Running", Reason="", readiness=true. Elapsed: 22.21781154s Mar 16 22:16:59.791: INFO: Pod "pod-subpath-test-projected-5zjj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.259787077s STEP: Saw pod success Mar 16 22:16:59.791: INFO: Pod "pod-subpath-test-projected-5zjj" satisfied condition "success or failure" Mar 16 22:16:59.795: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-5zjj container test-container-subpath-projected-5zjj: STEP: delete the pod Mar 16 22:16:59.826: INFO: Waiting for pod pod-subpath-test-projected-5zjj to disappear Mar 16 22:16:59.831: INFO: Pod pod-subpath-test-projected-5zjj no longer exists STEP: Deleting pod pod-subpath-test-projected-5zjj Mar 16 22:16:59.831: INFO: Deleting pod "pod-subpath-test-projected-5zjj" in namespace "subpath-7910" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:16:59.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7910" for this suite. 
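------------------------------
The subpath spec above keeps its pod Running for roughly 24 seconds on purpose: the container repeatedly reads a file mounted via subPath while the projected volume underneath is atomically updated, verifying that the subPath mount tracks those updates. A sketch of the mount layout, assuming client-go v0.17.x (volume source, key, and paths are illustrative, not the suite's exact ones):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSubpathPod mounts one projected volume twice: whole, and as a single
// entry via SubPath, so the two views can be compared while the data changes.
func createSubpathPod(cs kubernetes.Interface, ns, configMapName string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "subpath-reader",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "for i in $(seq 1 20); do cat /subpath-mount; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-vol", MountPath: "/whole-volume"},
					// SubPath exposes a single entry of the volume at MountPath.
					{Name: "projected-vol", MountPath: "/subpath-mount", SubPath: "data-1"},
				},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(pod)
}
------------------------------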
• [SLOW TEST:24.429 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":264,"skipped":4431,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:16:59.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 16 22:16:59.924: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5238 /api/v1/namespaces/watch-5238/configmaps/e2e-watch-test-watch-closed e8ca73d1-3408-4733-b295-113a0ef3f1ff 337604 0 2020-03-16 22:16:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 22:16:59.924: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5238 /api/v1/namespaces/watch-5238/configmaps/e2e-watch-test-watch-closed e8ca73d1-3408-4733-b295-113a0ef3f1ff 337605 0 2020-03-16 22:16:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 16 22:16:59.936: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5238 /api/v1/namespaces/watch-5238/configmaps/e2e-watch-test-watch-closed e8ca73d1-3408-4733-b295-113a0ef3f1ff 337606 0 2020-03-16 22:16:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 22:16:59.936: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5238 /api/v1/namespaces/watch-5238/configmaps/e2e-watch-test-watch-closed e8ca73d1-3408-4733-b295-113a0ef3f1ff 337607 0 2020-03-16 22:16:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:16:59.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5238" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":265,"skipped":4438,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:16:59.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 16 22:16:59.994: INFO: PodSpec: initContainers in spec.initContainers Mar 16 22:17:48.017: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e546d379-d181-4b13-8ad4-d7bc2e521b4c", GenerateName:"", Namespace:"init-container-1115", SelfLink:"/api/v1/namespaces/init-container-1115/pods/pod-init-e546d379-d181-4b13-8ad4-d7bc2e521b4c", UID:"e4f491d2-3781-401b-b497-ce5ae543c427", ResourceVersion:"337795", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719993819, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"994513883"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-skbzb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024946c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-skbzb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-skbzb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-skbzb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0043d1618), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002af9ce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0043d16a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0043d16d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0043d16d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0043d16dc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993820, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993820, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993820, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719993820, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.166", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.166"}}, StartTime:(*v1.Time)(0xc002bfd180), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000940460)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009404d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://9486fb7179709afcb84068544e5200b2febabf9802ea54c8b7b681f0f06fd16a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bfd1c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bfd1a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0043d176f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:17:48.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1115" for this suite. • [SLOW TEST:48.084 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":266,"skipped":4449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:17:48.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 16 22:17:48.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a33a028-728e-4c92-a4c9-574c835c1231" in namespace "projected-7017" to be "success or failure" Mar 16 22:17:48.186: INFO: Pod "downwardapi-volume-7a33a028-728e-4c92-a4c9-574c835c1231": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24556ms Mar 16 22:17:50.205: INFO: Pod "downwardapi-volume-7a33a028-728e-4c92-a4c9-574c835c1231": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027214495s Mar 16 22:17:52.210: INFO: Pod "downwardapi-volume-7a33a028-728e-4c92-a4c9-574c835c1231": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032535089s STEP: Saw pod success Mar 16 22:17:52.210: INFO: Pod "downwardapi-volume-7a33a028-728e-4c92-a4c9-574c835c1231" satisfied condition "success or failure" Mar 16 22:17:52.212: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7a33a028-728e-4c92-a4c9-574c835c1231 container client-container: STEP: delete the pod Mar 16 22:17:52.229: INFO: Waiting for pod downwardapi-volume-7a33a028-728e-4c92-a4c9-574c835c1231 to disappear Mar 16 22:17:52.233: INFO: Pod downwardapi-volume-7a33a028-728e-4c92-a4c9-574c835c1231 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:17:52.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7017" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4477,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:17:52.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:17:57.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6491" for this suite. 
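------------------------------
Both watch specs above lean on the same API property: a watch opened with a resourceVersion replays every change after that version, in order, so a client that lost its connection can resume without missing events, and concurrent watchers all observe the same ordering. A sketch of resuming a configmap watch from a previously observed version, assuming client-go v0.17.x (the label selector mirrors the one in the log; lastRV would come from the last event the earlier watch delivered):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// resumeConfigMapWatch reopens a watch at lastRV and prints the replayed
// events; in the spec above these are the MODIFIED made while the first
// watch was closed, followed by the DELETED.
func resumeConfigMapWatch(cs kubernetes.Interface, ns, lastRV string) error {
	// client-go v0.17.x signature: Watch takes only ListOptions, no context.
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: lastRV,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() { // runs until the watch is stopped or expires
		fmt.Printf("%s %v\n", ev.Type, ev.Object)
	}
	return nil
}
------------------------------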
• [SLOW TEST:5.306 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":268,"skipped":4494,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:17:57.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-534 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 22:17:57.636: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 16 22:18:21.750: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.198:8080/dial?request=hostname&protocol=http&host=10.244.1.167&port=8080&tries=1'] Namespace:pod-network-test-534 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 22:18:21.751: INFO: >>> kubeConfig: /root/.kube/config I0316 22:18:21.789501 6 log.go:172] (0xc002715600) (0xc001e80c80) Create stream I0316 22:18:21.789543 6 log.go:172] (0xc002715600) (0xc001e80c80) Stream added, broadcasting: 1 I0316 22:18:21.791404 6 log.go:172] (0xc002715600) Reply frame received for 1 I0316 22:18:21.791431 6 log.go:172] (0xc002715600) (0xc00195cdc0) Create stream I0316 22:18:21.791443 6 log.go:172] (0xc002715600) (0xc00195cdc0) Stream added, broadcasting: 3 I0316 22:18:21.792384 6 log.go:172] (0xc002715600) Reply frame received for 3 I0316 22:18:21.792404 6 log.go:172] (0xc002715600) (0xc001e80d20) Create stream I0316 22:18:21.792410 6 log.go:172] (0xc002715600) (0xc001e80d20) Stream added, broadcasting: 5 I0316 22:18:21.793746 6 log.go:172] (0xc002715600) Reply frame received for 5 I0316 22:18:21.883719 6 log.go:172] (0xc002715600) Data frame received for 3 I0316 22:18:21.883761 6 log.go:172] (0xc00195cdc0) (3) Data frame handling I0316 22:18:21.883782 6 log.go:172] (0xc00195cdc0) (3) Data frame sent I0316 22:18:21.884248 6 log.go:172] (0xc002715600) Data frame received for 3 I0316 22:18:21.884287 6 log.go:172] (0xc00195cdc0) (3) Data frame handling I0316 22:18:21.884410 6 log.go:172] (0xc002715600) Data frame received for 5 I0316 22:18:21.884493 6 log.go:172] (0xc001e80d20) (5) Data frame handling I0316 22:18:21.887357 6 log.go:172] (0xc002715600) Data frame received for 1 I0316 22:18:21.887392 6 log.go:172] (0xc001e80c80) (1) Data frame handling I0316 22:18:21.887427 6 log.go:172] (0xc001e80c80) (1) Data frame sent I0316 
22:18:21.887454 6 log.go:172] (0xc002715600) (0xc001e80c80) Stream removed, broadcasting: 1 I0316 22:18:21.887487 6 log.go:172] (0xc002715600) Go away received I0316 22:18:21.887640 6 log.go:172] (0xc002715600) (0xc001e80c80) Stream removed, broadcasting: 1 I0316 22:18:21.887667 6 log.go:172] (0xc002715600) (0xc00195cdc0) Stream removed, broadcasting: 3 I0316 22:18:21.887690 6 log.go:172] (0xc002715600) (0xc001e80d20) Stream removed, broadcasting: 5 Mar 16 22:18:21.887: INFO: Waiting for responses: map[] Mar 16 22:18:21.893: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.198:8080/dial?request=hostname&protocol=http&host=10.244.2.197&port=8080&tries=1'] Namespace:pod-network-test-534 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 22:18:21.893: INFO: >>> kubeConfig: /root/.kube/config I0316 22:18:21.923105 6 log.go:172] (0xc002ad0b00) (0xc001c57540) Create stream I0316 22:18:21.923136 6 log.go:172] (0xc002ad0b00) (0xc001c57540) Stream added, broadcasting: 1 I0316 22:18:21.925213 6 log.go:172] (0xc002ad0b00) Reply frame received for 1 I0316 22:18:21.925251 6 log.go:172] (0xc002ad0b00) (0xc001c575e0) Create stream I0316 22:18:21.925258 6 log.go:172] (0xc002ad0b00) (0xc001c575e0) Stream added, broadcasting: 3 I0316 22:18:21.926395 6 log.go:172] (0xc002ad0b00) Reply frame received for 3 I0316 22:18:21.926441 6 log.go:172] (0xc002ad0b00) (0xc001e80e60) Create stream I0316 22:18:21.926462 6 log.go:172] (0xc002ad0b00) (0xc001e80e60) Stream added, broadcasting: 5 I0316 22:18:21.927678 6 log.go:172] (0xc002ad0b00) Reply frame received for 5 I0316 22:18:21.991533 6 log.go:172] (0xc002ad0b00) Data frame received for 3 I0316 22:18:21.991554 6 log.go:172] (0xc001c575e0) (3) Data frame handling I0316 22:18:21.991565 6 log.go:172] (0xc001c575e0) (3) Data frame sent I0316 22:18:21.992448 6 log.go:172] (0xc002ad0b00) Data frame received for 5 I0316 22:18:21.992473 6 log.go:172] (0xc001e80e60) (5) Data frame handling I0316 22:18:21.992642 6 log.go:172] (0xc002ad0b00) Data frame received for 3 I0316 22:18:21.992688 6 log.go:172] (0xc001c575e0) (3) Data frame handling I0316 22:18:21.994400 6 log.go:172] (0xc002ad0b00) Data frame received for 1 I0316 22:18:21.994439 6 log.go:172] (0xc001c57540) (1) Data frame handling I0316 22:18:21.994476 6 log.go:172] (0xc001c57540) (1) Data frame sent I0316 22:18:21.994499 6 log.go:172] (0xc002ad0b00) (0xc001c57540) Stream removed, broadcasting: 1 I0316 22:18:21.994536 6 log.go:172] (0xc002ad0b00) Go away received I0316 22:18:21.994670 6 log.go:172] (0xc002ad0b00) (0xc001c57540) Stream removed, broadcasting: 1 I0316 22:18:21.994687 6 log.go:172] (0xc002ad0b00) (0xc001c575e0) Stream removed, broadcasting: 3 I0316 22:18:21.994697 6 log.go:172] (0xc002ad0b00) (0xc001e80e60) Stream removed, broadcasting: 5 Mar 16 22:18:21.994: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:18:21.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-534" for this suite. 
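------------------------------
This HTTP check is the same probe as the UDP one earlier, with protocol=http and the target's port 8080. The dial helper is expected to answer with a small JSON document listing the hostnames that responded (shape assumed from agnhost's behavior, not shown verbatim in the log), which is what lets the framework tick names off its "Waiting for responses" set. A hedged sketch of issuing and decoding one probe, the IPs again being illustrative values from the log:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// dialResponse is the assumed reply shape of agnhost's /dial endpoint.
type dialResponse struct {
	Responses []string `json:"responses"`
}

// checkHTTPHostname asks the test container at hostIP to probe targetIP over
// HTTP and returns the hostnames that answered.
func checkHTTPHostname(hostIP, targetIP string) ([]string, error) {
	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostname&protocol=http&host=%s&port=8080&tries=1",
		hostIP, targetIP)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var dr dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
		return nil, err
	}
	return dr.Responses, nil
}
------------------------------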
• [SLOW TEST:24.432 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4496,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:18:22.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 16 22:18:22.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1786' Mar 16 22:18:24.466: INFO: stderr: "" Mar 16 22:18:24.466: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 16 22:18:24.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1786' Mar 16 22:18:24.762: INFO: stderr: "" Mar 16 22:18:24.762: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 16 22:18:25.859: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 22:18:25.859: INFO: Found 0 / 1 Mar 16 22:18:26.780: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 22:18:26.781: INFO: Found 0 / 1 Mar 16 22:18:27.787: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 22:18:27.787: INFO: Found 1 / 1 Mar 16 22:18:27.787: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 16 22:18:27.790: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 22:18:27.790: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 16 22:18:27.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-5s6vs --namespace=kubectl-1786' Mar 16 22:18:27.966: INFO: stderr: "" Mar 16 22:18:27.966: INFO: stdout: "Name: agnhost-master-5s6vs\nNamespace: kubectl-1786\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Mon, 16 Mar 2020 22:18:24 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.168\nIPs:\n IP: 10.244.1.168\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://d811179743361bd2be0dc30bf60e726590a83b7582ca3f0cca3764b553434f5a\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 16 Mar 2020 22:18:26 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-qw9ww (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-qw9ww:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-qw9ww\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-1786/agnhost-master-5s6vs to jerma-worker\n Normal Pulled 2s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" Mar 16 22:18:27.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1786' Mar 16 22:18:28.106: INFO: stderr: "" Mar 16 22:18:28.106: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1786\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-5s6vs\n" Mar 16 22:18:28.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1786' Mar 16 22:18:28.300: INFO: stderr: "" Mar 16 22:18:28.300: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1786\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.99.99.214\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.168:6379\nSession Affinity: None\nEvents: \n" Mar 16 22:18:28.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Mar 16 22:18:28.424: INFO: stderr: "" Mar 16 22:18:28.424: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Mon, 16 Mar 2020 22:18:24 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 16 Mar 2020 22:17:13 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 16 Mar 2020 22:17:13 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 16 Mar 2020 22:17:13 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 16 Mar 2020 22:17:13 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 27h\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 27h\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27h\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 27h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 27h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 27h\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 27h\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 16 22:18:28.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1786' Mar 16 22:18:28.528: INFO: stderr: "" Mar 16 22:18:28.528: INFO: stdout: "Name: kubectl-1786\nLabels: e2e-framework=kubectl\n e2e-run=1cc8f662-dcb7-4362-9887-8f3eba70548e\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:18:28.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1786" for this suite. • [SLOW TEST:6.532 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1154 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":270,"skipped":4498,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:18:28.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1de33dd8-25b2-44e3-8cb9-92540a575e3c STEP: Creating a pod to test consume secrets Mar 16 22:18:28.667: INFO: Waiting up to 5m0s for pod "pod-secrets-01098f20-374c-4504-88b3-16490086d10c" in namespace "secrets-8837" to be "success or failure" Mar 16 22:18:28.671: INFO: Pod "pod-secrets-01098f20-374c-4504-88b3-16490086d10c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.814534ms Mar 16 22:18:30.675: INFO: Pod "pod-secrets-01098f20-374c-4504-88b3-16490086d10c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007534191s Mar 16 22:18:32.678: INFO: Pod "pod-secrets-01098f20-374c-4504-88b3-16490086d10c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011004863s STEP: Saw pod success Mar 16 22:18:32.678: INFO: Pod "pod-secrets-01098f20-374c-4504-88b3-16490086d10c" satisfied condition "success or failure" Mar 16 22:18:32.681: INFO: Trying to get logs from node jerma-worker pod pod-secrets-01098f20-374c-4504-88b3-16490086d10c container secret-env-test: STEP: delete the pod Mar 16 22:18:32.715: INFO: Waiting for pod pod-secrets-01098f20-374c-4504-88b3-16490086d10c to disappear Mar 16 22:18:32.719: INFO: Pod pod-secrets-01098f20-374c-4504-88b3-16490086d10c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:18:32.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8837" for this suite. 
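For reference, the Secrets-in-env-vars spec above boils down to a small amount of YAML: create a Secret, expose one of its keys to a container through an environment variable, and read the value back from the container's output. The sketch below is a hand-run approximation, not the test's own code; the names secret-test, pod-secrets-demo, and SECRET_DATA are illustrative stand-ins for the generated names in the log (secret-test-1de33dd8-..., pod-secrets-01098f20-...).

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never               # run once; the framework then waits for "success or failure"
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:                # env value sourced from the Secret's key
          name: secret-test
          key: data-1
EOF
kubectl logs pod-secrets-demo        # expected: SECRET_DATA=value-1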
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4502,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:18:32.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 16 22:18:32.768: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:18:46.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3279" for this suite. • [SLOW TEST:14.085 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":272,"skipped":4507,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:18:46.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6168 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-6168 Mar 16 22:18:46.891: INFO: Found 0 
stateful pods, waiting for 1 Mar 16 22:18:56.896: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 16 22:18:56.928: INFO: Deleting all statefulset in ns statefulset-6168 Mar 16 22:18:56.944: INFO: Scaling statefulset ss to 0 Mar 16 22:19:27.012: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 22:19:27.016: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:19:27.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6168" for this suite. • [SLOW TEST:40.226 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":273,"skipped":4509,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:19:27.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 16 22:19:27.131: INFO: >>> kubeConfig: /root/.kube/config Mar 16 22:19:29.031: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:19:39.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-803" for this suite. 
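Both CustomResourcePublishOpenAPI specs above turn on the served flag of a CRD version: flipping a version to served: false removes that version's definitions from the published OpenAPI spec while leaving the other versions untouched. A hedged sketch of the CRD shape involved, using a hypothetical group and kind in place of the test's generated names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.example.com    # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  versions:
  - name: v1
    served: true                     # published in the OpenAPI spec
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: false                    # unserved: dropped from the published spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF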
• [SLOW TEST:12.440 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":274,"skipped":4510,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:19:39.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1553 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-1553 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1553 Mar 16 22:19:39.553: INFO: Found 0 stateful pods, waiting for 1 Mar 16 22:19:49.558: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 16 22:19:49.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1553 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 22:19:49.842: INFO: stderr: "I0316 22:19:49.703112 3894 log.go:172] (0xc0009fe000) (0xc0009c8000) Create stream\nI0316 22:19:49.703172 3894 log.go:172] (0xc0009fe000) (0xc0009c8000) Stream added, broadcasting: 1\nI0316 22:19:49.706049 3894 log.go:172] (0xc0009fe000) Reply frame received for 1\nI0316 22:19:49.706084 3894 log.go:172] (0xc0009fe000) (0xc000739c20) Create stream\nI0316 22:19:49.706098 3894 log.go:172] (0xc0009fe000) (0xc000739c20) Stream added, broadcasting: 3\nI0316 22:19:49.707293 3894 log.go:172] (0xc0009fe000) Reply frame received for 3\nI0316 22:19:49.707357 3894 log.go:172] (0xc0009fe000) (0xc0004d6000) Create stream\nI0316 22:19:49.707372 3894 log.go:172] (0xc0009fe000) (0xc0004d6000) Stream added, broadcasting: 5\nI0316 22:19:49.708624 3894 log.go:172] (0xc0009fe000) Reply frame received for 5\nI0316 22:19:49.806172 3894 log.go:172] (0xc0009fe000) Data frame received for 5\nI0316 22:19:49.806204 3894 log.go:172] (0xc0004d6000) (5) Data frame handling\nI0316 22:19:49.806235 3894 log.go:172] (0xc0004d6000) (5) Data frame sent\n+ 
mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 22:19:49.832241 3894 log.go:172] (0xc0009fe000) Data frame received for 3\nI0316 22:19:49.832285 3894 log.go:172] (0xc000739c20) (3) Data frame handling\nI0316 22:19:49.832338 3894 log.go:172] (0xc000739c20) (3) Data frame sent\nI0316 22:19:49.832369 3894 log.go:172] (0xc0009fe000) Data frame received for 3\nI0316 22:19:49.832394 3894 log.go:172] (0xc000739c20) (3) Data frame handling\nI0316 22:19:49.832519 3894 log.go:172] (0xc0009fe000) Data frame received for 5\nI0316 22:19:49.832549 3894 log.go:172] (0xc0004d6000) (5) Data frame handling\nI0316 22:19:49.835060 3894 log.go:172] (0xc0009fe000) Data frame received for 1\nI0316 22:19:49.835100 3894 log.go:172] (0xc0009c8000) (1) Data frame handling\nI0316 22:19:49.835129 3894 log.go:172] (0xc0009c8000) (1) Data frame sent\nI0316 22:19:49.835159 3894 log.go:172] (0xc0009fe000) (0xc0009c8000) Stream removed, broadcasting: 1\nI0316 22:19:49.835192 3894 log.go:172] (0xc0009fe000) Go away received\nI0316 22:19:49.835582 3894 log.go:172] (0xc0009fe000) (0xc0009c8000) Stream removed, broadcasting: 1\nI0316 22:19:49.835610 3894 log.go:172] (0xc0009fe000) (0xc000739c20) Stream removed, broadcasting: 3\nI0316 22:19:49.835634 3894 log.go:172] (0xc0009fe000) (0xc0004d6000) Stream removed, broadcasting: 5\n" Mar 16 22:19:49.842: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 22:19:49.842: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 22:19:49.846: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 16 22:19:59.851: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 16 22:19:59.851: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 22:19:59.868: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:19:59.868: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:39 +0000 UTC }] Mar 16 22:19:59.868: INFO: Mar 16 22:19:59.868: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 16 22:20:00.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992984261s Mar 16 22:20:01.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989054557s Mar 16 22:20:02.907: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.958173298s Mar 16 22:20:03.912: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.953516389s Mar 16 22:20:04.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.948487185s Mar 16 22:20:05.922: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.943662687s Mar 16 22:20:06.944: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.938408527s Mar 16 22:20:07.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.916821074s Mar 16 22:20:08.955: INFO: Verifying statefulset ss doesn't scale past 3 for another 911.647691ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1553 Mar 
16 22:20:09.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 22:20:10.213: INFO: stderr: "I0316 22:20:10.137538 3915 log.go:172] (0xc000105600) (0xc000566000) Create stream\nI0316 22:20:10.137588 3915 log.go:172] (0xc000105600) (0xc000566000) Stream added, broadcasting: 1\nI0316 22:20:10.140158 3915 log.go:172] (0xc000105600) Reply frame received for 1\nI0316 22:20:10.140189 3915 log.go:172] (0xc000105600) (0xc000942000) Create stream\nI0316 22:20:10.140198 3915 log.go:172] (0xc000105600) (0xc000942000) Stream added, broadcasting: 3\nI0316 22:20:10.141008 3915 log.go:172] (0xc000105600) Reply frame received for 3\nI0316 22:20:10.141055 3915 log.go:172] (0xc000105600) (0xc0005660a0) Create stream\nI0316 22:20:10.141069 3915 log.go:172] (0xc000105600) (0xc0005660a0) Stream added, broadcasting: 5\nI0316 22:20:10.142414 3915 log.go:172] (0xc000105600) Reply frame received for 5\nI0316 22:20:10.206698 3915 log.go:172] (0xc000105600) Data frame received for 3\nI0316 22:20:10.206730 3915 log.go:172] (0xc000942000) (3) Data frame handling\nI0316 22:20:10.206758 3915 log.go:172] (0xc000942000) (3) Data frame sent\nI0316 22:20:10.206770 3915 log.go:172] (0xc000105600) Data frame received for 3\nI0316 22:20:10.206780 3915 log.go:172] (0xc000942000) (3) Data frame handling\nI0316 22:20:10.206946 3915 log.go:172] (0xc000105600) Data frame received for 5\nI0316 22:20:10.206977 3915 log.go:172] (0xc0005660a0) (5) Data frame handling\nI0316 22:20:10.207000 3915 log.go:172] (0xc0005660a0) (5) Data frame sent\nI0316 22:20:10.207013 3915 log.go:172] (0xc000105600) Data frame received for 5\nI0316 22:20:10.207022 3915 log.go:172] (0xc0005660a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 22:20:10.208482 3915 log.go:172] (0xc000105600) Data frame received for 1\nI0316 22:20:10.208515 3915 log.go:172] (0xc000566000) (1) Data frame handling\nI0316 22:20:10.208556 3915 log.go:172] (0xc000566000) (1) Data frame sent\nI0316 22:20:10.208588 3915 log.go:172] (0xc000105600) (0xc000566000) Stream removed, broadcasting: 1\nI0316 22:20:10.208719 3915 log.go:172] (0xc000105600) Go away received\nI0316 22:20:10.209301 3915 log.go:172] (0xc000105600) (0xc000566000) Stream removed, broadcasting: 1\nI0316 22:20:10.209323 3915 log.go:172] (0xc000105600) (0xc000942000) Stream removed, broadcasting: 3\nI0316 22:20:10.209336 3915 log.go:172] (0xc000105600) (0xc0005660a0) Stream removed, broadcasting: 5\n" Mar 16 22:20:10.213: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 22:20:10.213: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 22:20:10.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1553 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 22:20:10.411: INFO: stderr: "I0316 22:20:10.349419 3939 log.go:172] (0xc0003dcdc0) (0xc000b2e000) Create stream\nI0316 22:20:10.349484 3939 log.go:172] (0xc0003dcdc0) (0xc000b2e000) Stream added, broadcasting: 1\nI0316 22:20:10.352706 3939 log.go:172] (0xc0003dcdc0) Reply frame received for 1\nI0316 22:20:10.352769 3939 log.go:172] (0xc0003dcdc0) (0xc00096a000) Create stream\nI0316 22:20:10.352780 3939 log.go:172] (0xc0003dcdc0) (0xc00096a000) Stream added, broadcasting: 3\nI0316 
22:20:10.356332 3939 log.go:172] (0xc0003dcdc0) Reply frame received for 3\nI0316 22:20:10.356373 3939 log.go:172] (0xc0003dcdc0) (0xc0006c5b80) Create stream\nI0316 22:20:10.356384 3939 log.go:172] (0xc0003dcdc0) (0xc0006c5b80) Stream added, broadcasting: 5\nI0316 22:20:10.357468 3939 log.go:172] (0xc0003dcdc0) Reply frame received for 5\nI0316 22:20:10.405950 3939 log.go:172] (0xc0003dcdc0) Data frame received for 5\nI0316 22:20:10.405991 3939 log.go:172] (0xc0006c5b80) (5) Data frame handling\nI0316 22:20:10.406007 3939 log.go:172] (0xc0006c5b80) (5) Data frame sent\nI0316 22:20:10.406019 3939 log.go:172] (0xc0003dcdc0) Data frame received for 5\nI0316 22:20:10.406030 3939 log.go:172] (0xc0006c5b80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0316 22:20:10.406056 3939 log.go:172] (0xc0003dcdc0) Data frame received for 3\nI0316 22:20:10.406071 3939 log.go:172] (0xc00096a000) (3) Data frame handling\nI0316 22:20:10.406082 3939 log.go:172] (0xc00096a000) (3) Data frame sent\nI0316 22:20:10.406092 3939 log.go:172] (0xc0003dcdc0) Data frame received for 3\nI0316 22:20:10.406100 3939 log.go:172] (0xc00096a000) (3) Data frame handling\nI0316 22:20:10.407477 3939 log.go:172] (0xc0003dcdc0) Data frame received for 1\nI0316 22:20:10.407498 3939 log.go:172] (0xc000b2e000) (1) Data frame handling\nI0316 22:20:10.407514 3939 log.go:172] (0xc000b2e000) (1) Data frame sent\nI0316 22:20:10.407522 3939 log.go:172] (0xc0003dcdc0) (0xc000b2e000) Stream removed, broadcasting: 1\nI0316 22:20:10.407581 3939 log.go:172] (0xc0003dcdc0) Go away received\nI0316 22:20:10.407768 3939 log.go:172] (0xc0003dcdc0) (0xc000b2e000) Stream removed, broadcasting: 1\nI0316 22:20:10.407781 3939 log.go:172] (0xc0003dcdc0) (0xc00096a000) Stream removed, broadcasting: 3\nI0316 22:20:10.407787 3939 log.go:172] (0xc0003dcdc0) (0xc0006c5b80) Stream removed, broadcasting: 5\n" Mar 16 22:20:10.411: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 22:20:10.411: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 22:20:10.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1553 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 22:20:10.656: INFO: stderr: "I0316 22:20:10.564723 3960 log.go:172] (0xc00080c0b0) (0xc0005d9a40) Create stream\nI0316 22:20:10.564778 3960 log.go:172] (0xc00080c0b0) (0xc0005d9a40) Stream added, broadcasting: 1\nI0316 22:20:10.568299 3960 log.go:172] (0xc00080c0b0) Reply frame received for 1\nI0316 22:20:10.568357 3960 log.go:172] (0xc00080c0b0) (0xc00046e820) Create stream\nI0316 22:20:10.568372 3960 log.go:172] (0xc00080c0b0) (0xc00046e820) Stream added, broadcasting: 3\nI0316 22:20:10.569758 3960 log.go:172] (0xc00080c0b0) Reply frame received for 3\nI0316 22:20:10.569805 3960 log.go:172] (0xc00080c0b0) (0xc0008f6000) Create stream\nI0316 22:20:10.569819 3960 log.go:172] (0xc00080c0b0) (0xc0008f6000) Stream added, broadcasting: 5\nI0316 22:20:10.570922 3960 log.go:172] (0xc00080c0b0) Reply frame received for 5\nI0316 22:20:10.649066 3960 log.go:172] (0xc00080c0b0) Data frame received for 3\nI0316 22:20:10.649197 3960 log.go:172] (0xc00046e820) (3) Data frame handling\nI0316 22:20:10.649230 3960 log.go:172] (0xc00046e820) (3) Data frame sent\nI0316 22:20:10.649250 3960 log.go:172] 
(0xc00080c0b0) Data frame received for 3\nI0316 22:20:10.649267 3960 log.go:172] (0xc00046e820) (3) Data frame handling\nI0316 22:20:10.649324 3960 log.go:172] (0xc00080c0b0) Data frame received for 5\nI0316 22:20:10.649342 3960 log.go:172] (0xc0008f6000) (5) Data frame handling\nI0316 22:20:10.649354 3960 log.go:172] (0xc0008f6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0316 22:20:10.649455 3960 log.go:172] (0xc00080c0b0) Data frame received for 5\nI0316 22:20:10.649479 3960 log.go:172] (0xc0008f6000) (5) Data frame handling\nI0316 22:20:10.651582 3960 log.go:172] (0xc00080c0b0) Data frame received for 1\nI0316 22:20:10.651625 3960 log.go:172] (0xc0005d9a40) (1) Data frame handling\nI0316 22:20:10.651655 3960 log.go:172] (0xc0005d9a40) (1) Data frame sent\nI0316 22:20:10.651688 3960 log.go:172] (0xc00080c0b0) (0xc0005d9a40) Stream removed, broadcasting: 1\nI0316 22:20:10.651723 3960 log.go:172] (0xc00080c0b0) Go away received\nI0316 22:20:10.652183 3960 log.go:172] (0xc00080c0b0) (0xc0005d9a40) Stream removed, broadcasting: 1\nI0316 22:20:10.652209 3960 log.go:172] (0xc00080c0b0) (0xc00046e820) Stream removed, broadcasting: 3\nI0316 22:20:10.652233 3960 log.go:172] (0xc00080c0b0) (0xc0008f6000) Stream removed, broadcasting: 5\n" Mar 16 22:20:10.656: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 22:20:10.656: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 22:20:10.661: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 22:20:10.661: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 22:20:10.661: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 16 22:20:10.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1553 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 22:20:10.862: INFO: stderr: "I0316 22:20:10.795756 3982 log.go:172] (0xc00011aa50) (0xc000a08000) Create stream\nI0316 22:20:10.795806 3982 log.go:172] (0xc00011aa50) (0xc000a08000) Stream added, broadcasting: 1\nI0316 22:20:10.798178 3982 log.go:172] (0xc00011aa50) Reply frame received for 1\nI0316 22:20:10.798218 3982 log.go:172] (0xc00011aa50) (0xc0006c1c20) Create stream\nI0316 22:20:10.798230 3982 log.go:172] (0xc00011aa50) (0xc0006c1c20) Stream added, broadcasting: 3\nI0316 22:20:10.799278 3982 log.go:172] (0xc00011aa50) Reply frame received for 3\nI0316 22:20:10.799321 3982 log.go:172] (0xc00011aa50) (0xc000a080a0) Create stream\nI0316 22:20:10.799331 3982 log.go:172] (0xc00011aa50) (0xc000a080a0) Stream added, broadcasting: 5\nI0316 22:20:10.800561 3982 log.go:172] (0xc00011aa50) Reply frame received for 5\nI0316 22:20:10.855340 3982 log.go:172] (0xc00011aa50) Data frame received for 5\nI0316 22:20:10.855372 3982 log.go:172] (0xc000a080a0) (5) Data frame handling\nI0316 22:20:10.855392 3982 log.go:172] (0xc000a080a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 22:20:10.855425 3982 log.go:172] (0xc00011aa50) Data frame received for 3\nI0316 22:20:10.855487 3982 log.go:172] (0xc0006c1c20) (3) Data frame handling\nI0316 22:20:10.855515 3982 log.go:172] (0xc0006c1c20) (3) 
Data frame sent\nI0316 22:20:10.855539 3982 log.go:172] (0xc00011aa50) Data frame received for 5\nI0316 22:20:10.855596 3982 log.go:172] (0xc000a080a0) (5) Data frame handling\nI0316 22:20:10.855637 3982 log.go:172] (0xc00011aa50) Data frame received for 3\nI0316 22:20:10.855662 3982 log.go:172] (0xc0006c1c20) (3) Data frame handling\nI0316 22:20:10.857356 3982 log.go:172] (0xc00011aa50) Data frame received for 1\nI0316 22:20:10.857381 3982 log.go:172] (0xc000a08000) (1) Data frame handling\nI0316 22:20:10.857394 3982 log.go:172] (0xc000a08000) (1) Data frame sent\nI0316 22:20:10.857409 3982 log.go:172] (0xc00011aa50) (0xc000a08000) Stream removed, broadcasting: 1\nI0316 22:20:10.857796 3982 log.go:172] (0xc00011aa50) (0xc000a08000) Stream removed, broadcasting: 1\nI0316 22:20:10.857818 3982 log.go:172] (0xc00011aa50) (0xc0006c1c20) Stream removed, broadcasting: 3\nI0316 22:20:10.857830 3982 log.go:172] (0xc00011aa50) (0xc000a080a0) Stream removed, broadcasting: 5\n" Mar 16 22:20:10.862: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 22:20:10.862: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 22:20:10.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1553 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 22:20:11.111: INFO: stderr: "I0316 22:20:11.002532 4002 log.go:172] (0xc0005260b0) (0xc000677c20) Create stream\nI0316 22:20:11.002603 4002 log.go:172] (0xc0005260b0) (0xc000677c20) Stream added, broadcasting: 1\nI0316 22:20:11.006425 4002 log.go:172] (0xc0005260b0) Reply frame received for 1\nI0316 22:20:11.006485 4002 log.go:172] (0xc0005260b0) (0xc0009b8000) Create stream\nI0316 22:20:11.006499 4002 log.go:172] (0xc0005260b0) (0xc0009b8000) Stream added, broadcasting: 3\nI0316 22:20:11.007581 4002 log.go:172] (0xc0005260b0) Reply frame received for 3\nI0316 22:20:11.007617 4002 log.go:172] (0xc0005260b0) (0xc000735400) Create stream\nI0316 22:20:11.007637 4002 log.go:172] (0xc0005260b0) (0xc000735400) Stream added, broadcasting: 5\nI0316 22:20:11.009009 4002 log.go:172] (0xc0005260b0) Reply frame received for 5\nI0316 22:20:11.074895 4002 log.go:172] (0xc0005260b0) Data frame received for 5\nI0316 22:20:11.074928 4002 log.go:172] (0xc000735400) (5) Data frame handling\nI0316 22:20:11.074967 4002 log.go:172] (0xc000735400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 22:20:11.103870 4002 log.go:172] (0xc0005260b0) Data frame received for 3\nI0316 22:20:11.103913 4002 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0316 22:20:11.103953 4002 log.go:172] (0xc0009b8000) (3) Data frame sent\nI0316 22:20:11.104180 4002 log.go:172] (0xc0005260b0) Data frame received for 5\nI0316 22:20:11.104218 4002 log.go:172] (0xc000735400) (5) Data frame handling\nI0316 22:20:11.104391 4002 log.go:172] (0xc0005260b0) Data frame received for 3\nI0316 22:20:11.104416 4002 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0316 22:20:11.106615 4002 log.go:172] (0xc0005260b0) Data frame received for 1\nI0316 22:20:11.106643 4002 log.go:172] (0xc000677c20) (1) Data frame handling\nI0316 22:20:11.106660 4002 log.go:172] (0xc000677c20) (1) Data frame sent\nI0316 22:20:11.106675 4002 log.go:172] (0xc0005260b0) (0xc000677c20) Stream removed, broadcasting: 1\nI0316 22:20:11.106694 4002 log.go:172] (0xc0005260b0) Go away received\nI0316 
22:20:11.107116 4002 log.go:172] (0xc0005260b0) (0xc000677c20) Stream removed, broadcasting: 1\nI0316 22:20:11.107137 4002 log.go:172] (0xc0005260b0) (0xc0009b8000) Stream removed, broadcasting: 3\nI0316 22:20:11.107148 4002 log.go:172] (0xc0005260b0) (0xc000735400) Stream removed, broadcasting: 5\n" Mar 16 22:20:11.111: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 22:20:11.111: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 22:20:11.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1553 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 22:20:11.337: INFO: stderr: "I0316 22:20:11.231737 4022 log.go:172] (0xc000a44fd0) (0xc000a7a140) Create stream\nI0316 22:20:11.231780 4022 log.go:172] (0xc000a44fd0) (0xc000a7a140) Stream added, broadcasting: 1\nI0316 22:20:11.234573 4022 log.go:172] (0xc000a44fd0) Reply frame received for 1\nI0316 22:20:11.234619 4022 log.go:172] (0xc000a44fd0) (0xc00094e0a0) Create stream\nI0316 22:20:11.234633 4022 log.go:172] (0xc000a44fd0) (0xc00094e0a0) Stream added, broadcasting: 3\nI0316 22:20:11.235694 4022 log.go:172] (0xc000a44fd0) Reply frame received for 3\nI0316 22:20:11.235756 4022 log.go:172] (0xc000a44fd0) (0xc000a7a1e0) Create stream\nI0316 22:20:11.235775 4022 log.go:172] (0xc000a44fd0) (0xc000a7a1e0) Stream added, broadcasting: 5\nI0316 22:20:11.236649 4022 log.go:172] (0xc000a44fd0) Reply frame received for 5\nI0316 22:20:11.303060 4022 log.go:172] (0xc000a44fd0) Data frame received for 5\nI0316 22:20:11.303093 4022 log.go:172] (0xc000a7a1e0) (5) Data frame handling\nI0316 22:20:11.303119 4022 log.go:172] (0xc000a7a1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 22:20:11.332114 4022 log.go:172] (0xc000a44fd0) Data frame received for 5\nI0316 22:20:11.332136 4022 log.go:172] (0xc000a7a1e0) (5) Data frame handling\nI0316 22:20:11.332172 4022 log.go:172] (0xc000a44fd0) Data frame received for 3\nI0316 22:20:11.332220 4022 log.go:172] (0xc00094e0a0) (3) Data frame handling\nI0316 22:20:11.332268 4022 log.go:172] (0xc00094e0a0) (3) Data frame sent\nI0316 22:20:11.332299 4022 log.go:172] (0xc000a44fd0) Data frame received for 3\nI0316 22:20:11.332323 4022 log.go:172] (0xc00094e0a0) (3) Data frame handling\nI0316 22:20:11.333852 4022 log.go:172] (0xc000a44fd0) Data frame received for 1\nI0316 22:20:11.333898 4022 log.go:172] (0xc000a7a140) (1) Data frame handling\nI0316 22:20:11.333923 4022 log.go:172] (0xc000a7a140) (1) Data frame sent\nI0316 22:20:11.333938 4022 log.go:172] (0xc000a44fd0) (0xc000a7a140) Stream removed, broadcasting: 1\nI0316 22:20:11.333953 4022 log.go:172] (0xc000a44fd0) Go away received\nI0316 22:20:11.334200 4022 log.go:172] (0xc000a44fd0) (0xc000a7a140) Stream removed, broadcasting: 1\nI0316 22:20:11.334214 4022 log.go:172] (0xc000a44fd0) (0xc00094e0a0) Stream removed, broadcasting: 3\nI0316 22:20:11.334220 4022 log.go:172] (0xc000a44fd0) (0xc000a7a1e0) Stream removed, broadcasting: 5\n" Mar 16 22:20:11.337: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 22:20:11.337: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 22:20:11.337: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 22:20:11.340: INFO: Waiting for stateful 
set status.readyReplicas to become 0, currently 3 Mar 16 22:20:21.348: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 16 22:20:21.348: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 16 22:20:21.348: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 16 22:20:21.364: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:20:21.364: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:39 +0000 UTC }] Mar 16 22:20:21.364: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:21.364: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:21.364: INFO: Mar 16 22:20:21.364: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 16 22:20:22.484: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:20:22.484: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:39 +0000 UTC }] Mar 16 22:20:22.484: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:22.484: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:22.484: INFO: Mar 16 22:20:22.484: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 16 22:20:23.489: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:20:23.489: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:39 +0000 UTC }] Mar 16 22:20:23.489: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:23.489: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:23.489: INFO: Mar 16 22:20:23.489: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 16 22:20:24.493: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:20:24.493: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:24.493: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:24.493: INFO: Mar 16 22:20:24.493: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 16 22:20:25.497: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:20:25.497: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 
22:19:59 +0000 UTC }] Mar 16 22:20:25.497: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:25.497: INFO: Mar 16 22:20:25.497: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 16 22:20:26.501: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:20:26.502: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:26.502: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:26.502: INFO: Mar 16 22:20:26.502: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 16 22:20:27.507: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:20:27.507: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:27.507: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:27.507: INFO: Mar 16 22:20:27.507: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 16 22:20:28.511: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 22:20:28.511: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:28.512: INFO: 
ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:20:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 22:19:59 +0000 UTC }] Mar 16 22:20:28.512: INFO: Mar 16 22:20:28.512: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 16 22:20:29.542: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.84379048s Mar 16 22:20:30.547: INFO: Verifying statefulset ss doesn't scale past 0 for another 813.111138ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1553 Mar 16 22:20:31.551: INFO: Scaling statefulset ss to 0 Mar 16 22:20:31.561: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 16 22:20:31.563: INFO: Deleting all statefulset in ns statefulset-1553 Mar 16 22:20:31.566: INFO: Scaling statefulset ss to 0 Mar 16 22:20:31.574: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 22:20:31.576: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:20:31.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1553" for this suite. • [SLOW TEST:52.117 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":275,"skipped":4519,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:20:31.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi-version CRD Mar 16 22:20:31.643: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old 
version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:20:48.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9755" for this suite. • [SLOW TEST:16.449 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":276,"skipped":4533,"failed":0} [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:20:48.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:20:48.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1832" for this suite. 
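Despite the framework boilerplate, the Kubelet spec above does very little: it schedules a busybox pod whose command always exits non-zero and then verifies that such a failing pod can still be deleted normally. A rough hand-run equivalent, with a hypothetical pod name:

kubectl run bin-false --image=busybox --restart=Never -- /bin/false
kubectl get pod bin-false            # command exits non-zero; STATUS shows Error
kubectl delete pod bin-false         # the failed pod can still be deleted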
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4533,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 16 22:20:48.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 22:20:48.903: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 22:20:51.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719994048, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719994048, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719994048, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719994048, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 22:20:54.422: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration Mar 16 22:20:54.481: INFO: Waiting for webhook configuration to be ready... STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 16 22:20:54.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8282" for this suite. STEP: Destroying namespace "webhook-8282-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.572 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":278,"skipped":4542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSMar 16 22:20:54.778: INFO: Running AfterSuite actions on all nodes Mar 16 22:20:54.778: INFO: Running AfterSuite actions on node 1 Mar 16 22:20:54.778: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0} Ran 278 of 4843 Specs in 4432.538 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped PASS
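The closing webhook spec exercises in-place edits to an existing MutatingWebhookConfiguration: its rules are first updated so that CREATE is no longer intercepted (a configMap created afterwards must come through unmutated), then patched to include CREATE again (after which creation is mutated once more). A hedged sketch of such a patch, assuming a configuration named e2e-test-mutating-webhook with a single webhook and rule:

kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'            # CREATE no longer intercepted
kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'   # CREATE intercepted again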