I0310 21:07:52.184744       6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0310 21:07:52.184919       6 e2e.go:109] Starting e2e run "2d7bcd85-e710-4684-905a-e4f1c05fcad0" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583874471 - Will randomize all specs
Will run 278 of 4814 specs

Mar 10 21:07:52.233: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:07:52.237: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 10 21:07:52.258: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 10 21:07:52.298: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 10 21:07:52.298: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 10 21:07:52.298: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 10 21:07:52.305: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 10 21:07:52.305: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 10 21:07:52.305: INFO: e2e test version: v1.17.0
Mar 10 21:07:52.306: INFO: kube-apiserver version: v1.17.2
Mar 10 21:07:52.306: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:07:52.309: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:07:52.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
Mar 10 21:07:52.371: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Mar 10 21:08:00.403: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7920 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:00.403: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:00.504: INFO: Exec stderr: ""
Mar 10 21:08:00.504: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7920 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:00.504: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:00.611: INFO: Exec stderr: ""
Mar 10 21:08:00.611: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7920 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:00.611: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:00.714: INFO: Exec stderr: ""
Mar 10 21:08:00.714: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7920 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:00.714: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:00.810: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Mar 10 21:08:00.810: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7920 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:00.810: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:00.898: INFO: Exec stderr: ""
Mar 10 21:08:00.898: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7920 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:00.898: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:01.002: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Mar 10 21:08:01.002: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7920 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:01.002: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:01.090: INFO: Exec stderr: ""
Mar 10 21:08:01.090: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7920 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:01.090: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:01.186: INFO: Exec stderr: ""
Mar 10 21:08:01.186: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7920 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:01.186: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:01.277: INFO: Exec stderr: ""
Mar 10 21:08:01.277: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7920 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:01.278: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:01.369: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:01.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7920" for this suite.
• [SLOW TEST:9.068 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":21,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
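For reference, the pods this spec provisions look roughly like the following client-go sketch. It is illustrative only: the pod/container names echo the ones in the run above, but the image, sleep command, and exact container count are assumptions; the authoritative spec lives in test/e2e/common/kubelet_etc_hosts.go. The point being tested: the kubelet manages /etc/hosts for containers of a hostNetwork=false pod, except for a container that mounts its own /etc/hosts, and it leaves hostNetwork=true pods alone.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// etcHostsPod sketches the pod shapes exercised above: plain busybox
// containers (kubelet-managed /etc/hosts expected only when hostNetwork is
// false) plus, optionally, a container that mounts its own /etc/hosts,
// which the kubelet must leave untouched.
func etcHostsPod(name string, hostNetwork, mountOwnEtcHosts bool) *corev1.Pod {
	containers := []corev1.Container{
		{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}},
		{Name: "busybox-2", Image: "busybox", Command: []string{"sleep", "900"}},
	}
	var volumes []corev1.Volume
	if mountOwnEtcHosts {
		// busybox-3 supplies its own /etc/hosts via a hostPath mount, so the
		// kubelet must not manage it even though hostNetwork is false.
		containers = append(containers, corev1.Container{
			Name: "busybox-3", Image: "busybox", Command: []string{"sleep", "900"},
			VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
		})
		volumes = append(volumes, corev1.Volume{
			Name: "host-etc-hosts",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
			},
		})
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			HostNetwork: hostNetwork,
			Containers:  containers,
			Volumes:     volumes,
		},
	}
}

func main() {
	fmt.Println(etcHostsPod("test-pod", false, true).Name)
	fmt.Println(etcHostsPod("test-host-network-pod", true, false).Name)
}
```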
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":2,"skipped":44,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:08:02.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-b455z in namespace proxy-3593 I0310 21:08:02.286805 6 runners.go:189] Created replication controller with name: proxy-service-b455z, namespace: proxy-3593, replica count: 1 I0310 21:08:03.337431 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0310 21:08:04.337695 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0310 21:08:05.337918 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0310 21:08:06.338138 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0310 21:08:07.338388 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0310 21:08:08.338590 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0310 21:08:09.338812 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0310 21:08:10.338969 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0310 21:08:11.339152 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0310 21:08:12.339400 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0310 21:08:13.339618 6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 10 21:08:13.342: INFO: setup took 11.084717274s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 10 21:08:13.358: INFO: (0) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 15.382177ms) Mar 10 21:08:13.358: INFO: (0) 
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:08:02.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-b455z in namespace proxy-3593
I0310 21:08:02.286805       6 runners.go:189] Created replication controller with name: proxy-service-b455z, namespace: proxy-3593, replica count: 1
I0310 21:08:03.337431       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0310 21:08:04.337695       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0310 21:08:05.337918       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0310 21:08:06.338138       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0310 21:08:07.338388       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0310 21:08:08.338590       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0310 21:08:09.338812       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0310 21:08:10.338969       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0310 21:08:11.339152       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0310 21:08:12.339400       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0310 21:08:13.339618       6 runners.go:189] proxy-service-b455z Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 10 21:08:13.342: INFO: setup took 11.084717274s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
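Each attempt below is a GET against an apiserver proxy path of the form /api/v1/namespaces/{ns}/{pods|services}/[{scheme}:]{name}[:{port}]/proxy/. A small sketch of the URL shapes being timed (a hedged illustration, not the test's code; the namespace, pod, and service names are copied from this run):

```go
package main

import "fmt"

// The 16 cases combine two resources (pod, service), optional scheme
// prefixes (none, "http:", "https:"), and named or numeric ports, all
// routed through the apiserver's proxy subresource.
func main() {
	const ns = "proxy-3593"
	pod := "proxy-service-b455z-dzkcf"
	svc := "proxy-service-b455z"
	for _, target := range []string{
		"pods/" + pod + ":160",                    // numeric port, default scheme
		"pods/http:" + pod + ":162",               // explicit http scheme
		"pods/https:" + pod + ":462",              // explicit https scheme
		"services/" + svc + ":portname1",          // named service port
		"services/https:" + svc + ":tlsportname1", // named TLS service port
	} {
		fmt.Printf("GET /api/v1/namespaces/%s/%s/proxy/\n", ns, target)
	}
}
```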
Mar 10 21:08:13.358: INFO: (0) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 15.382177ms)
Mar 10 21:08:13.358: INFO: (0) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 15.91498ms)
Mar 10 21:08:13.358: INFO: (0) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:1080/proxy/: ... (200; 15.876033ms)
Mar 10 21:08:13.359: INFO: (0) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 16.592292ms)
Mar 10 21:08:13.360: INFO: (0) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 17.3252ms)
Mar 10 21:08:13.360: INFO: (0) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 16.838625ms)
Mar 10 21:08:13.360: INFO: (0) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 17.597112ms)
Mar 10 21:08:13.360: INFO: (0) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 17.476359ms)
Mar 10 21:08:13.360: INFO: (0) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 17.584058ms)
Mar 10 21:08:13.360: INFO: (0) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 17.606441ms)
Mar 10 21:08:13.365: INFO: (0) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 22.285792ms)
Mar 10 21:08:13.367: INFO: (0) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 25.027522ms)
Mar 10 21:08:13.368: INFO: (0) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 25.222988ms)
Mar 10 21:08:13.368: INFO: (0) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 25.271808ms)
Mar 10 21:08:13.368: INFO: (0) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 25.191735ms)
Mar 10 21:08:13.369: INFO: (0) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: test (200; 4.871925ms)
Mar 10 21:08:13.375: INFO: (1) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 5.546097ms)
Mar 10 21:08:13.375: INFO: (1) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 5.7264ms)
Mar 10 21:08:13.375: INFO: (1) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 5.635781ms)
Mar 10 21:08:13.375: INFO: (1) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 5.660442ms)
Mar 10 21:08:13.375: INFO: (1) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 5.814241ms)
Mar 10 21:08:13.375: INFO: (1) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 5.892509ms)
Mar 10 21:08:13.375: INFO: (1) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:1080/proxy/: ... (200; 5.799ms)
Mar 10 21:08:13.375: INFO: (1) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 5.901259ms)
Mar 10 21:08:13.376: INFO: (1) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 6.730209ms)
Mar 10 21:08:13.377: INFO: (1) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 7.176096ms)
Mar 10 21:08:13.377: INFO: (1) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 7.207854ms)
Mar 10 21:08:13.377: INFO: (1) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 7.155164ms)
Mar 10 21:08:13.377: INFO: (1) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 7.201054ms)
Mar 10 21:08:13.377: INFO: (1) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 7.272221ms)
Mar 10 21:08:13.382: INFO: (2) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 5.500189ms)
Mar 10 21:08:13.383: INFO: (2) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 5.496169ms)
Mar 10 21:08:13.383: INFO: (2) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 5.463677ms)
Mar 10 21:08:13.383: INFO: (2) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 5.49431ms)
Mar 10 21:08:13.383: INFO: (2) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 5.518012ms)
Mar 10 21:08:13.383: INFO: (2) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 5.569428ms)
Mar 10 21:08:13.383: INFO: (2) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 5.613733ms)
Mar 10 21:08:13.383: INFO: (2) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: ... (200; 5.573513ms)
Mar 10 21:08:13.383: INFO: (2) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 5.907582ms)
Mar 10 21:08:13.384: INFO: (2) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 6.618577ms)
Mar 10 21:08:13.384: INFO: (2) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 6.715412ms)
Mar 10 21:08:13.384: INFO: (2) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 7.143788ms)
Mar 10 21:08:13.385: INFO: (2) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 7.567209ms)
Mar 10 21:08:13.385: INFO: (2) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 7.442188ms)
Mar 10 21:08:13.385: INFO: (2) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 8.173838ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 7.325019ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 7.953818ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 8.086457ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 8.085423ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 8.118275ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 8.092926ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 8.183692ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 8.090146ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:1080/proxy/: ... (200; 8.087125ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 8.207618ms)
Mar 10 21:08:13.393: INFO: (3) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: test<... (200; 8.138894ms)
Mar 10 21:08:13.394: INFO: (3) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 8.183483ms)
Mar 10 21:08:13.394: INFO: (3) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 8.157664ms)
Mar 10 21:08:13.396: INFO: (4) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 2.603758ms)
Mar 10 21:08:13.397: INFO: (4) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 3.167356ms)
Mar 10 21:08:13.397: INFO: (4) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 3.224845ms)
Mar 10 21:08:13.398: INFO: (4) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 4.183613ms)
Mar 10 21:08:13.398: INFO: (4) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.250667ms)
Mar 10 21:08:13.398: INFO: (4) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 4.201426ms)
Mar 10 21:08:13.398: INFO: (4) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 4.509083ms)
Mar 10 21:08:13.398: INFO: (4) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:1080/proxy/: ... (200; 4.653421ms)
Mar 10 21:08:13.398: INFO: (4) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 4.616588ms)
Mar 10 21:08:13.398: INFO: (4) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: ... (200; 4.179947ms)
Mar 10 21:08:13.405: INFO: (5) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 4.675708ms)
Mar 10 21:08:13.405: INFO: (5) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.731712ms)
Mar 10 21:08:13.405: INFO: (5) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 4.736355ms)
Mar 10 21:08:13.405: INFO: (5) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 4.735875ms)
Mar 10 21:08:13.405: INFO: (5) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 4.707065ms)
Mar 10 21:08:13.405: INFO: (5) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 4.803645ms)
Mar 10 21:08:13.405: INFO: (5) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 4.78693ms)
Mar 10 21:08:13.405: INFO: (5) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: test (200; 1.979845ms)
Mar 10 21:08:13.411: INFO: (6) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 3.957037ms)
Mar 10 21:08:13.411: INFO: (6) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.038805ms)
Mar 10 21:08:13.411: INFO: (6) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: ... (200; 4.538681ms)
Mar 10 21:08:13.412: INFO: (6) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 4.556433ms)
Mar 10 21:08:13.412: INFO: (6) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 4.608716ms)
Mar 10 21:08:13.412: INFO: (6) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 4.657185ms)
Mar 10 21:08:13.413: INFO: (6) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 5.401178ms)
Mar 10 21:08:13.413: INFO: (6) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 5.556078ms)
Mar 10 21:08:13.413: INFO: (6) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 5.616569ms)
Mar 10 21:08:13.413: INFO: (6) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 5.616167ms)
Mar 10 21:08:13.416: INFO: (7) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 3.138332ms)
Mar 10 21:08:13.416: INFO: (7) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 3.417452ms)
Mar 10 21:08:13.417: INFO: (7) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 3.669417ms)
Mar 10 21:08:13.417: INFO: (7) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:1080/proxy/: ... (200; 3.762441ms)
Mar 10 21:08:13.417: INFO: (7) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 3.871214ms)
Mar 10 21:08:13.417: INFO: (7) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 4.11327ms)
Mar 10 21:08:13.417: INFO: (7) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 4.31581ms)
Mar 10 21:08:13.417: INFO: (7) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 4.365769ms)
Mar 10 21:08:13.418: INFO: (7) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 4.652145ms)
Mar 10 21:08:13.418: INFO: (7) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 4.791853ms)
Mar 10 21:08:13.418: INFO: (7) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 4.721114ms)
Mar 10 21:08:13.419: INFO: (7) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 5.751112ms)
Mar 10 21:08:13.419: INFO: (7) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 6.038282ms)
Mar 10 21:08:13.419: INFO: (7) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 6.02798ms)
Mar 10 21:08:13.419: INFO: (7) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: test (200; 3.841344ms)
Mar 10 21:08:13.424: INFO: (8) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:1080/proxy/: ... (200; 4.563364ms)
Mar 10 21:08:13.424: INFO: (8) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 4.187933ms)
Mar 10 21:08:13.424: INFO: (8) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 4.527601ms)
Mar 10 21:08:13.424: INFO: (8) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 3.862749ms)
Mar 10 21:08:13.428: INFO: (8) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 8.847028ms)
Mar 10 21:08:13.429: INFO: (8) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 8.134577ms)
Mar 10 21:08:13.429: INFO: (8) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 8.888756ms)
Mar 10 21:08:13.429: INFO: (8) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 8.962822ms)
Mar 10 21:08:13.429: INFO: (8) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 8.812696ms)
Mar 10 21:08:13.429: INFO: (8) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 9.063918ms)
Mar 10 21:08:13.434: INFO: (9) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 4.7285ms)
Mar 10 21:08:13.434: INFO: (9) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.70188ms)
Mar 10 21:08:13.434: INFO: (9) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: ... (200; 4.776877ms)
Mar 10 21:08:13.434: INFO: (9) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 4.838151ms)
Mar 10 21:08:13.434: INFO: (9) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.822133ms)
Mar 10 21:08:13.434: INFO: (9) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 4.819487ms)
Mar 10 21:08:13.434: INFO: (9) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 4.863045ms)
Mar 10 21:08:13.435: INFO: (9) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 5.754905ms)
Mar 10 21:08:13.435: INFO: (9) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 5.872557ms)
Mar 10 21:08:13.435: INFO: (9) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 5.855603ms)
Mar 10 21:08:13.435: INFO: (9) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 5.807484ms)
Mar 10 21:08:13.435: INFO: (9) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 5.885185ms)
Mar 10 21:08:13.435: INFO: (9) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 5.805989ms)
Mar 10 21:08:13.437: INFO: (10) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 2.173403ms)
Mar 10 21:08:13.437: INFO: (10) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 2.555043ms)
Mar 10 21:08:13.437: INFO: (10) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 2.556476ms)
Mar 10 21:08:13.439: INFO: (10) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.167082ms)
Mar 10 21:08:13.439: INFO: (10) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 4.279531ms)
Mar 10 21:08:13.439: INFO: (10) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 4.404973ms)
Mar 10 21:08:13.439: INFO: (10) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 4.592655ms)
Mar 10 21:08:13.440: INFO: (10) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:1080/proxy/: ... (200; 4.721023ms)
Mar 10 21:08:13.440: INFO: (10) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 4.703945ms)
Mar 10 21:08:13.440: INFO: (10) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 5.345165ms)
Mar 10 21:08:13.440: INFO: (10) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 5.323954ms)
Mar 10 21:08:13.440: INFO: (10) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: ... (200; 2.491492ms)
Mar 10 21:08:13.444: INFO: (11) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 3.24232ms)
Mar 10 21:08:13.444: INFO: (11) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 3.213907ms)
Mar 10 21:08:13.444: INFO: (11) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 3.206863ms)
Mar 10 21:08:13.444: INFO: (11) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 3.270076ms)
Mar 10 21:08:13.444: INFO: (11) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 3.344078ms)
Mar 10 21:08:13.444: INFO: (11) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 3.528919ms)
Mar 10 21:08:13.444: INFO: (11) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 3.647391ms)
Mar 10 21:08:13.445: INFO: (11) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: ... (200; 3.701188ms)
Mar 10 21:08:13.452: INFO: (12) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 4.045601ms)
Mar 10 21:08:13.452: INFO: (12) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 3.342974ms)
Mar 10 21:08:13.452: INFO: (12) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 3.296678ms)
Mar 10 21:08:13.452: INFO: (12) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.082964ms)
Mar 10 21:08:13.452: INFO: (12) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 3.94607ms)
Mar 10 21:08:13.452: INFO: (12) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: ... (200; 5.256279ms)
Mar 10 21:08:13.458: INFO: (13) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 5.347964ms)
Mar 10 21:08:13.458: INFO: (13) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 5.27031ms)
Mar 10 21:08:13.458: INFO: (13) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 5.370111ms)
Mar 10 21:08:13.458: INFO: (13) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 5.339325ms)
Mar 10 21:08:13.458: INFO: (13) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 5.370383ms)
Mar 10 21:08:13.458: INFO: (13) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 5.408801ms)
Mar 10 21:08:13.458: INFO: (13) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: test (200; 5.579799ms)
Mar 10 21:08:13.462: INFO: (14) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: test (200; 3.452388ms)
Mar 10 21:08:13.462: INFO: (14) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 2.872905ms)
Mar 10 21:08:13.462: INFO: (14) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 3.416984ms)
Mar 10 21:08:13.462: INFO: (14) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 3.056641ms)
Mar 10 21:08:13.463: INFO: (14) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 3.704739ms)
Mar 10 21:08:13.463: INFO: (14) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 3.284148ms)
Mar 10 21:08:13.463: INFO: (14) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 3.469399ms)
Mar 10 21:08:13.463: INFO: (14) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 3.929719ms)
Mar 10 21:08:13.463: INFO: (14) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 4.096404ms)
Mar 10 21:08:13.463: INFO: (14) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 4.252623ms)
Mar 10 21:08:13.463: INFO: (14) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:1080/proxy/: ... (200; 4.186064ms)
Mar 10 21:08:13.464: INFO: (14) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 4.350455ms)
Mar 10 21:08:13.464: INFO: (14) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 4.823808ms)
Mar 10 21:08:13.464: INFO: (14) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 5.39612ms)
Mar 10 21:08:13.466: INFO: (15) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 1.871609ms)
Mar 10 21:08:13.466: INFO: (15) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 2.079625ms)
Mar 10 21:08:13.466: INFO: (15) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 2.443143ms)
Mar 10 21:08:13.468: INFO: (15) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 3.493452ms)
Mar 10 21:08:13.468: INFO: (15) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 3.634291ms)
Mar 10 21:08:13.468: INFO: (15) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 3.901429ms)
Mar 10 21:08:13.468: INFO: (15) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 4.140082ms)
Mar 10 21:08:13.469: INFO: (15) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 4.448585ms)
Mar 10 21:08:13.469: INFO: (15) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:1080/proxy/: ... (200; 4.473887ms)
Mar 10 21:08:13.469: INFO: (15) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.451411ms)
Mar 10 21:08:13.469: INFO: (15) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: ... (200; 3.215505ms)
Mar 10 21:08:13.472: INFO: (16) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 3.260166ms)
Mar 10 21:08:13.472: INFO: (16) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 3.287885ms)
Mar 10 21:08:13.472: INFO: (16) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: test (200; 3.297487ms)
Mar 10 21:08:13.472: INFO: (16) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 3.414116ms)
Mar 10 21:08:13.472: INFO: (16) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 3.552106ms)
Mar 10 21:08:13.472: INFO: (16) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 3.562247ms)
Mar 10 21:08:13.472: INFO: (16) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 3.526834ms)
Mar 10 21:08:13.473: INFO: (16) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 3.700754ms)
Mar 10 21:08:13.473: INFO: (16) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 3.821091ms)
Mar 10 21:08:13.473: INFO: (16) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 3.897738ms)
Mar 10 21:08:13.473: INFO: (16) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 3.930766ms)
Mar 10 21:08:13.475: INFO: (17) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 2.034513ms)
Mar 10 21:08:13.476: INFO: (17) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 3.540704ms)
Mar 10 21:08:13.476: INFO: (17) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 3.503192ms)
Mar 10 21:08:13.476: INFO: (17) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 3.630366ms)
Mar 10 21:08:13.477: INFO: (17) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 4.121032ms)
Mar 10 21:08:13.477: INFO: (17) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.17936ms)
Mar 10 21:08:13.477: INFO: (17) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 4.251477ms)
Mar 10 21:08:13.477: INFO: (17) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 4.198604ms)
Mar 10 21:08:13.477: INFO: (17) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: ... (200; 4.230795ms)
Mar 10 21:08:13.477: INFO: (17) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 4.366475ms)
Mar 10 21:08:13.478: INFO: (17) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 4.857324ms)
Mar 10 21:08:13.478: INFO: (17) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 5.076691ms)
Mar 10 21:08:13.478: INFO: (17) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 5.096551ms)
Mar 10 21:08:13.478: INFO: (17) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 5.053522ms)
Mar 10 21:08:13.482: INFO: (18) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:1080/proxy/: test<... (200; 4.043082ms)
Mar 10 21:08:13.482: INFO: (18) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:1080/proxy/: ... (200; 3.984711ms)
Mar 10 21:08:13.483: INFO: (18) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.821002ms)
Mar 10 21:08:13.483: INFO: (18) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:160/proxy/: foo (200; 4.883345ms)
Mar 10 21:08:13.483: INFO: (18) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 5.0942ms)
Mar 10 21:08:13.483: INFO: (18) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 5.469022ms)
Mar 10 21:08:13.484: INFO: (18) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: test<... (200; 2.639306ms)
Mar 10 21:08:13.487: INFO: (19) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:462/proxy/: tls qux (200; 2.782307ms)
Mar 10 21:08:13.487: INFO: (19) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf:162/proxy/: bar (200; 2.79428ms)
Mar 10 21:08:13.488: INFO: (19) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:460/proxy/: tls baz (200; 2.957484ms)
Mar 10 21:08:13.488: INFO: (19) /api/v1/namespaces/proxy-3593/pods/https:proxy-service-b455z-dzkcf:443/proxy/: ... (200; 2.955542ms)
Mar 10 21:08:13.488: INFO: (19) /api/v1/namespaces/proxy-3593/pods/http:proxy-service-b455z-dzkcf:162/proxy/: bar (200; 3.138581ms)
Mar 10 21:08:13.488: INFO: (19) /api/v1/namespaces/proxy-3593/pods/proxy-service-b455z-dzkcf/proxy/: test (200; 3.118514ms)
Mar 10 21:08:13.488: INFO: (19) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname1/proxy/: foo (200; 3.22005ms)
Mar 10 21:08:13.489: INFO: (19) /api/v1/namespaces/proxy-3593/services/proxy-service-b455z:portname2/proxy/: bar (200; 3.933974ms)
Mar 10 21:08:13.489: INFO: (19) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname2/proxy/: bar (200; 4.089487ms)
Mar 10 21:08:13.489: INFO: (19) /api/v1/namespaces/proxy-3593/services/http:proxy-service-b455z:portname1/proxy/: foo (200; 4.022407ms)
Mar 10 21:08:13.489: INFO: (19) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname1/proxy/: tls baz (200; 4.148781ms)
Mar 10 21:08:13.489: INFO: (19) /api/v1/namespaces/proxy-3593/services/https:proxy-service-b455z:tlsportname2/proxy/: tls qux (200; 4.191166ms)
STEP: deleting ReplicationController proxy-service-b455z in namespace proxy-3593, will wait for the garbage collector to delete the pods
Mar 10 21:08:13.566: INFO: Deleting ReplicationController proxy-service-b455z took: 24.901434ms
Mar 10 21:08:13.866: INFO: Terminating ReplicationController proxy-service-b455z pods took: 300.22099ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:26.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3593" for this suite.
• [SLOW TEST:24.072 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":3,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:08:26.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:30.268: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2164" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":67,"failed":0} SS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:08:30.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 10 21:08:30.366: INFO: Created pod &Pod{ObjectMeta:{dns-369 dns-369 /api/v1/namespaces/dns-369/pods/dns-369 bb7f3384-ab75-4511-9bab-ed5115cbffca 667084 0 2020-03-10 21:08:30 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qqhqb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qqhqb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qqhqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameserver
s:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Mar 10 21:08:32.388: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-369 PodName:dns-369 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 21:08:32.388: INFO: >>> kubeConfig: /root/.kube/config I0310 21:08:32.420213 6 log.go:172] (0xc002ae22c0) (0xc002b22820) Create stream I0310 21:08:32.420243 6 log.go:172] (0xc002ae22c0) (0xc002b22820) Stream added, broadcasting: 1 I0310 21:08:32.421906 6 log.go:172] (0xc002ae22c0) Reply frame received for 1 I0310 21:08:32.421947 6 log.go:172] (0xc002ae22c0) (0xc000b43860) Create stream I0310 21:08:32.421977 6 log.go:172] (0xc002ae22c0) (0xc000b43860) Stream added, broadcasting: 3 I0310 21:08:32.422958 6 log.go:172] (0xc002ae22c0) Reply frame received for 3 I0310 21:08:32.422989 6 log.go:172] (0xc002ae22c0) (0xc000190280) Create stream I0310 21:08:32.423001 6 log.go:172] (0xc002ae22c0) (0xc000190280) Stream added, broadcasting: 5 I0310 21:08:32.423990 6 log.go:172] (0xc002ae22c0) Reply frame received for 5 I0310 21:08:32.493533 6 log.go:172] (0xc002ae22c0) Data frame received for 3 I0310 21:08:32.493554 6 log.go:172] (0xc000b43860) (3) Data frame handling I0310 21:08:32.493561 6 log.go:172] (0xc000b43860) (3) Data frame sent I0310 21:08:32.494102 6 log.go:172] (0xc002ae22c0) Data frame received for 3 I0310 21:08:32.494158 6 log.go:172] (0xc000b43860) (3) Data frame handling I0310 21:08:32.494198 6 log.go:172] (0xc002ae22c0) Data frame received for 5 I0310 21:08:32.494222 6 log.go:172] (0xc000190280) (5) Data frame handling I0310 21:08:32.495893 6 log.go:172] (0xc002ae22c0) Data frame received for 1 I0310 21:08:32.495910 6 log.go:172] (0xc002b22820) (1) Data frame handling I0310 21:08:32.495921 6 log.go:172] (0xc002b22820) (1) Data frame sent I0310 21:08:32.495935 6 log.go:172] (0xc002ae22c0) (0xc002b22820) Stream removed, broadcasting: 1 I0310 21:08:32.495968 6 log.go:172] (0xc002ae22c0) Go away received I0310 21:08:32.496014 6 log.go:172] (0xc002ae22c0) (0xc002b22820) Stream removed, broadcasting: 1 I0310 21:08:32.496027 6 log.go:172] (0xc002ae22c0) (0xc000b43860) Stream removed, broadcasting: 3 I0310 21:08:32.496038 6 log.go:172] (0xc002ae22c0) (0xc000190280) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 10 21:08:32.496: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-369 PodName:dns-369 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 21:08:32.496: INFO: >>> kubeConfig: /root/.kube/config I0310 21:08:32.528274 6 log.go:172] (0xc002ae2bb0) (0xc002b22a00) Create stream I0310 21:08:32.528303 6 log.go:172] (0xc002ae2bb0) (0xc002b22a00) Stream added, broadcasting: 1 I0310 21:08:32.530063 6 log.go:172] (0xc002ae2bb0) Reply frame received for 1 I0310 21:08:32.530106 6 log.go:172] (0xc002ae2bb0) (0xc002b22aa0) Create stream I0310 21:08:32.530162 6 log.go:172] (0xc002ae2bb0) (0xc002b22aa0) Stream added, broadcasting: 3 I0310 21:08:32.531232 6 log.go:172] (0xc002ae2bb0) Reply frame received for 3 I0310 21:08:32.531299 6 log.go:172] (0xc002ae2bb0) (0xc002b22b40) Create stream I0310 21:08:32.531319 6 log.go:172] (0xc002ae2bb0) (0xc002b22b40) Stream added, broadcasting: 5 I0310 21:08:32.532119 6 log.go:172] (0xc002ae2bb0) Reply frame received for 5 I0310 21:08:32.606683 6 log.go:172] (0xc002ae2bb0) Data frame received for 3 I0310 21:08:32.606708 6 log.go:172] (0xc002b22aa0) (3) Data frame handling I0310 21:08:32.606730 6 log.go:172] (0xc002b22aa0) (3) Data frame sent I0310 21:08:32.607225 6 log.go:172] (0xc002ae2bb0) Data frame received for 5 I0310 21:08:32.607243 6 log.go:172] (0xc002b22b40) (5) Data frame handling I0310 21:08:32.607343 6 log.go:172] (0xc002ae2bb0) Data frame received for 3 I0310 21:08:32.607368 6 log.go:172] (0xc002b22aa0) (3) Data frame handling I0310 21:08:32.608960 6 log.go:172] (0xc002ae2bb0) Data frame received for 1 I0310 21:08:32.608981 6 log.go:172] (0xc002b22a00) (1) Data frame handling I0310 21:08:32.608996 6 log.go:172] (0xc002b22a00) (1) Data frame sent I0310 21:08:32.609013 6 log.go:172] (0xc002ae2bb0) (0xc002b22a00) Stream removed, broadcasting: 1 I0310 21:08:32.609028 6 log.go:172] (0xc002ae2bb0) Go away received I0310 21:08:32.609221 6 log.go:172] (0xc002ae2bb0) (0xc002b22a00) Stream removed, broadcasting: 1 I0310 21:08:32.609250 6 log.go:172] (0xc002ae2bb0) (0xc002b22aa0) Stream removed, broadcasting: 3 I0310 21:08:32.609271 6 log.go:172] (0xc002ae2bb0) (0xc002b22b40) Stream removed, broadcasting: 5 Mar 10 21:08:32.609: INFO: Deleting pod dns-369... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:08:32.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-369" for this suite. 
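For readers reproducing this DNS spec by hand, the pod the suite creates boils down to the following manifest; dnsPolicy, dnsConfig, image, and args are taken directly from the spec dump above, and the auto-injected service account volume is omitted as boilerplate:

apiVersion: v1
kind: Pod
metadata:
  name: dns-369
  namespace: dns-369
spec:
  dnsPolicy: None            # bypass cluster DNS entirely
  dnsConfig:
    nameservers:
      - 1.1.1.1              # custom resolver written into the pod's /etc/resolv.conf
    searches:
      - resolv.conf.local    # custom search suffix
  containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]        # keep the container alive for the exec checks

The two ExecWithOptions calls then run /agnhost dns-suffix and /agnhost dns-server-list inside the container and compare the output against these configured values.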
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":5,"skipped":69,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:08:32.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:08:32.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81ddeec2-c384-43b2-b5a0-9049a0a2e0ff" in namespace "projected-9385" to be "success or failure" Mar 10 21:08:32.938: INFO: Pod "downwardapi-volume-81ddeec2-c384-43b2-b5a0-9049a0a2e0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 88.912509ms Mar 10 21:08:34.940: INFO: Pod "downwardapi-volume-81ddeec2-c384-43b2-b5a0-9049a0a2e0ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.091055891s STEP: Saw pod success Mar 10 21:08:34.940: INFO: Pod "downwardapi-volume-81ddeec2-c384-43b2-b5a0-9049a0a2e0ff" satisfied condition "success or failure" Mar 10 21:08:34.942: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-81ddeec2-c384-43b2-b5a0-9049a0a2e0ff container client-container: STEP: delete the pod Mar 10 21:08:34.966: INFO: Waiting for pod downwardapi-volume-81ddeec2-c384-43b2-b5a0-9049a0a2e0ff to disappear Mar 10 21:08:35.009: INFO: Pod downwardapi-volume-81ddeec2-c384-43b2-b5a0-9049a0a2e0ff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:08:35.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9385" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":92,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:08:35.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:08:35.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e6d3a47-6587-42d1-a092-638b266df228" in namespace "projected-765" to be "success or failure" Mar 10 21:08:35.067: INFO: Pod "downwardapi-volume-9e6d3a47-6587-42d1-a092-638b266df228": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2868ms Mar 10 21:08:37.070: INFO: Pod "downwardapi-volume-9e6d3a47-6587-42d1-a092-638b266df228": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005487907s STEP: Saw pod success Mar 10 21:08:37.070: INFO: Pod "downwardapi-volume-9e6d3a47-6587-42d1-a092-638b266df228" satisfied condition "success or failure" Mar 10 21:08:37.073: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9e6d3a47-6587-42d1-a092-638b266df228 container client-container: STEP: delete the pod Mar 10 21:08:37.131: INFO: Waiting for pod downwardapi-volume-9e6d3a47-6587-42d1-a092-638b266df228 to disappear Mar 10 21:08:37.139: INFO: Pod downwardapi-volume-9e6d3a47-6587-42d1-a092-638b266df228 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:08:37.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-765" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":128,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:08:37.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 10 21:08:37.203: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 10 21:08:37.217: INFO: Waiting for terminating namespaces to be deleted... Mar 10 21:08:37.219: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 10 21:08:37.223: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-7920 started at 2020-03-10 21:07:56 +0000 UTC (2 container statuses recorded) Mar 10 21:08:37.223: INFO: Container busybox-1 ready: true, restart count 0 Mar 10 21:08:37.223: INFO: Container busybox-2 ready: true, restart count 0 Mar 10 21:08:37.223: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:08:37.223: INFO: Container kube-proxy ready: true, restart count 0 Mar 10 21:08:37.223: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:08:37.223: INFO: Container kindnet-cni ready: true, restart count 0 Mar 10 21:08:37.223: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 10 21:08:37.236: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:08:37.236: INFO: Container kube-proxy ready: true, restart count 0 Mar 10 21:08:37.236: INFO: test-pod from e2e-kubelet-etc-hosts-7920 started at 2020-03-10 21:07:52 +0000 UTC (3 container statuses recorded) Mar 10 21:08:37.236: INFO: Container busybox-1 ready: true, restart count 0 Mar 10 21:08:37.236: INFO: Container busybox-2 ready: true, restart count 0 Mar 10 21:08:37.236: INFO: Container busybox-3 ready: true, restart count 0 Mar 10 21:08:37.236: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:08:37.236: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fb0d7b530f496a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:08:38.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9360" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":8,"skipped":134,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:08:38.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:08:39.028: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 10 21:08:41.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471319, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471319, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471319, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471319, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:08:44.090: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:08:44.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should 
be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:08:45.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3187" for this suite. STEP: Destroying namespace "webhook-3187-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.129 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":9,"skipped":155,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:08:45.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 10 21:08:45.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-822' Mar 10 21:08:47.770: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 10 21:08:47.770: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Mar 10 21:08:47.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-822' Mar 10 21:08:47.893: INFO: stderr: "" Mar 10 21:08:47.893: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:08:47.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-822" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":10,"skipped":166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:08:47.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 10 21:08:47.983: INFO: Waiting up to 5m0s for pod "client-containers-957bdfcd-8bf2-4e61-9d3f-c1486cc932a2" in namespace "containers-3816" to be "success or failure" Mar 10 21:08:47.992: INFO: Pod "client-containers-957bdfcd-8bf2-4e61-9d3f-c1486cc932a2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.414208ms Mar 10 21:08:49.995: INFO: Pod "client-containers-957bdfcd-8bf2-4e61-9d3f-c1486cc932a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01242501s STEP: Saw pod success Mar 10 21:08:49.995: INFO: Pod "client-containers-957bdfcd-8bf2-4e61-9d3f-c1486cc932a2" satisfied condition "success or failure" Mar 10 21:08:49.997: INFO: Trying to get logs from node jerma-worker2 pod client-containers-957bdfcd-8bf2-4e61-9d3f-c1486cc932a2 container test-container: STEP: delete the pod Mar 10 21:08:50.016: INFO: Waiting for pod client-containers-957bdfcd-8bf2-4e61-9d3f-c1486cc932a2 to disappear Mar 10 21:08:50.021: INFO: Pod client-containers-957bdfcd-8bf2-4e61-9d3f-c1486cc932a2 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:08:50.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3816" for this suite. 
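In pod terms, overriding the image's default command and arguments means setting both command, which replaces the image ENTRYPOINT, and args, which replaces the image CMD. A sketch with illustrative values (pod name, image, and strings are assumptions; the override semantics are standard Kubernetes behavior):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: docker.io/library/busybox:1.29   # assumed image
      command: ["/bin/echo"]                  # replaces the image ENTRYPOINT
      args: ["override", "arguments"]         # replaces the image CMD

The spec's "test override all" step checks the container output to confirm that neither the built-in entrypoint nor the built-in command ran.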
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":203,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:08:50.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-02e8ca7f-c789-433b-8f24-73b042daf4d6 STEP: Creating a pod to test consume secrets Mar 10 21:08:50.119: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fdab88b5-120d-4769-b76a-01ede1eb17b7" in namespace "projected-271" to be "success or failure" Mar 10 21:08:50.144: INFO: Pod "pod-projected-secrets-fdab88b5-120d-4769-b76a-01ede1eb17b7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.33256ms Mar 10 21:08:52.147: INFO: Pod "pod-projected-secrets-fdab88b5-120d-4769-b76a-01ede1eb17b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.028159515s STEP: Saw pod success Mar 10 21:08:52.147: INFO: Pod "pod-projected-secrets-fdab88b5-120d-4769-b76a-01ede1eb17b7" satisfied condition "success or failure" Mar 10 21:08:52.149: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-fdab88b5-120d-4769-b76a-01ede1eb17b7 container projected-secret-volume-test: STEP: delete the pod Mar 10 21:08:52.203: INFO: Waiting for pod pod-projected-secrets-fdab88b5-120d-4769-b76a-01ede1eb17b7 to disappear Mar 10 21:08:52.212: INFO: Pod pod-projected-secrets-fdab88b5-120d-4769-b76a-01ede1eb17b7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:08:52.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-271" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":203,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:08:52.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-09f43a53-9462-41f7-a8de-461720d3999e in namespace container-probe-6620 Mar 10 21:08:54.274: INFO: Started pod test-webserver-09f43a53-9462-41f7-a8de-461720d3999e in namespace container-probe-6620 STEP: checking the pod's current state and verifying that restartCount is present Mar 10 21:08:54.277: INFO: Initial restart count of pod test-webserver-09f43a53-9462-41f7-a8de-461720d3999e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:12:54.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6620" for this suite. 
• [SLOW TEST:242.633 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":211,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:12:54.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:12:54.957: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 10 21:13:00.036: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 10 21:13:00.036: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 10 21:13:02.040: INFO: Creating deployment "test-rollover-deployment" Mar 10 21:13:02.083: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 10 21:13:04.089: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 10 21:13:04.095: INFO: Ensure that both replica sets have 1 created replica Mar 10 21:13:04.101: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 10 21:13:04.106: INFO: Updating deployment test-rollover-deployment Mar 10 21:13:04.106: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 10 21:13:06.117: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 10 21:13:06.120: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 10 21:13:06.125: INFO: all replica sets need to contain the pod-template-hash label Mar 10 21:13:06.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471584, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 10 21:13:08.132: INFO: all replica sets need to contain the pod-template-hash label Mar 10 21:13:08.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 10 21:13:10.185: INFO: all replica sets need to contain the pod-template-hash label Mar 10 21:13:10.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 10 21:13:12.132: INFO: all replica sets need to contain the pod-template-hash label Mar 10 21:13:12.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 10 21:13:14.132: INFO: all replica sets need to contain the pod-template-hash label Mar 10 21:13:14.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 10 21:13:16.131: INFO: all replica sets need to contain the pod-template-hash label Mar 10 21:13:16.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471582, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 10 21:13:18.140: INFO: Mar 10 21:13:18.140: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 10 21:13:18.147: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2537 /apis/apps/v1/namespaces/deployment-2537/deployments/test-rollover-deployment 33dff72e-2f84-47be-83ed-97d9ab61c17d 668275 2 2020-03-10 21:13:02 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001bf5ed8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-10 21:13:02 +0000 UTC,LastTransitionTime:2020-03-10 21:13:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-10 21:13:16 +0000 UTC,LastTransitionTime:2020-03-10 21:13:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 10 21:13:18.150: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-2537 /apis/apps/v1/namespaces/deployment-2537/replicasets/test-rollover-deployment-574d6dfbff ab5ccbac-31b1-4061-86d1-551044ef9757 668264 2 2020-03-10 21:13:04 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 33dff72e-2f84-47be-83ed-97d9ab61c17d 0xc002b0a5d7 0xc002b0a5d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b0a6b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 10 21:13:18.150: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 10 21:13:18.151: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2537 /apis/apps/v1/namespaces/deployment-2537/replicasets/test-rollover-controller 7f044724-112b-4339-9764-a61605dfe88b 668273 2 2020-03-10 21:12:54 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 33dff72e-2f84-47be-83ed-97d9ab61c17d 0xc002b0a437 0xc002b0a438}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b0a518 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 10 21:13:18.151: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-2537 /apis/apps/v1/namespaces/deployment-2537/replicasets/test-rollover-deployment-f6c94f66c 4a391a02-0a78-4555-8759-984320972e0c 668220 2 2020-03-10 21:13:02 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 33dff72e-2f84-47be-83ed-97d9ab61c17d 0xc002b0a7c0 0xc002b0a7c1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b0a888 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 10 21:13:18.153: INFO: Pod "test-rollover-deployment-574d6dfbff-qz8sj" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-qz8sj test-rollover-deployment-574d6dfbff- deployment-2537 /api/v1/namespaces/deployment-2537/pods/test-rollover-deployment-574d6dfbff-qz8sj cff5bfa5-b7a4-4290-9bfa-65f707fa05c3 668232 0 2020-03-10 21:13:04 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff ab5ccbac-31b1-4061-86d1-551044ef9757 0xc002b0b167 0xc002b0b168}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jm6qm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jm6qm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jm6qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 21:13:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 21:13:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 21:13:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 21:13:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.156,StartTime:2020-03-10 21:13:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 21:13:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://185b3bbffa32f0bffd1d430757915d107f30ed8c271ea3cfc15cc095417991d9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.156,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:13:18.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2537" for this suite. • [SLOW TEST:23.311 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":14,"skipped":232,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:13:18.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-40b9e995-7e3d-4513-bfe4-fa72b6906dc5 STEP: Creating a pod to test consume configMaps Mar 10 21:13:18.221: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba5b825f-16d1-43e2-8c49-4b0387506668" in namespace "configmap-49" to be "success or failure" Mar 10 21:13:18.225: INFO: Pod "pod-configmaps-ba5b825f-16d1-43e2-8c49-4b0387506668": Phase="Pending", Reason="", readiness=false. Elapsed: 3.85524ms Mar 10 21:13:20.229: INFO: Pod "pod-configmaps-ba5b825f-16d1-43e2-8c49-4b0387506668": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007139324s STEP: Saw pod success Mar 10 21:13:20.229: INFO: Pod "pod-configmaps-ba5b825f-16d1-43e2-8c49-4b0387506668" satisfied condition "success or failure" Mar 10 21:13:20.231: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ba5b825f-16d1-43e2-8c49-4b0387506668 container configmap-volume-test: STEP: delete the pod Mar 10 21:13:20.273: INFO: Waiting for pod pod-configmaps-ba5b825f-16d1-43e2-8c49-4b0387506668 to disappear Mar 10 21:13:20.285: INFO: Pod pod-configmaps-ba5b825f-16d1-43e2-8c49-4b0387506668 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:13:20.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-49" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":242,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:13:20.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:13:20.417: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 10 21:13:25.420: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 10 21:13:25.420: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 10 21:13:25.480: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8675 /apis/apps/v1/namespaces/deployment-8675/deployments/test-cleanup-deployment c54d2b85-7cdb-4029-89af-ae948e87b9cc 668380 1 2020-03-10 21:13:25 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000cb3028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 10 21:13:25.538: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-8675 /apis/apps/v1/namespaces/deployment-8675/replicasets/test-cleanup-deployment-55ffc6b7b6 65a09b88-9b52-4388-81fb-4e249babca1c 668386 1 2020-03-10 21:13:25 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c54d2b85-7cdb-4029-89af-ae948e87b9cc 0xc000cb3447 0xc000cb3448}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000cb34f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 10 21:13:25.538: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 10 21:13:25.538: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-8675 /apis/apps/v1/namespaces/deployment-8675/replicasets/test-cleanup-controller 5e7a01d2-8a95-4763-8621-18680c53f9e0 668381 1 2020-03-10 21:13:20 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment c54d2b85-7cdb-4029-89af-ae948e87b9cc 0xc000cb3377 0xc000cb3378}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000cb33d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 10
21:13:25.552: INFO: Pod "test-cleanup-controller-68lhj" is available: &Pod{ObjectMeta:{test-cleanup-controller-68lhj test-cleanup-controller- deployment-8675 /api/v1/namespaces/deployment-8675/pods/test-cleanup-controller-68lhj 4b7d53ed-b28a-4b69-9b85-131ce5ced9dc 668326 0 2020-03-10 21:13:20 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 5e7a01d2-8a95-4763-8621-18680c53f9e0 0xc000cb3d27 0xc000cb3d28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dhq96,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dhq96,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dhq96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 21:13:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 21:13:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 21:13:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-10 21:13:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.174,StartTime:2020-03-10 21:13:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 21:13:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7542aa44e12d0f5e51cda2c84c085277afd382a15157e37de3c0d1496a1cb2c4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 21:13:25.552: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-vp9jt" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-vp9jt test-cleanup-deployment-55ffc6b7b6- deployment-8675 /api/v1/namespaces/deployment-8675/pods/test-cleanup-deployment-55ffc6b7b6-vp9jt 4bc6b6ad-0702-4aa6-9fee-69f2eaa0c9d4 668388 0 2020-03-10 21:13:25 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 65a09b88-9b52-4388-81fb-4e249babca1c 0xc000cb3f37 0xc000cb3f38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dhq96,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dhq96,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dhq96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolera
tions:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 21:13:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:13:25.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8675" for this suite. • [SLOW TEST:5.283 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":16,"skipped":250,"failed":0} [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:13:25.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 10 21:13:27.714: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 10 21:13:37.815: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:13:37.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4307" for this suite.
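------------------------------
For reference, the graceful-deletion flow exercised by the Delete Grace Period spec above (delete with a grace period, then poll until the kubelet has observed the termination and the pod is gone) can be sketched with client-go. This is a minimal sketch, not the test's own code: the pod name and the 30-second grace period are assumptions, and the method signatures are those of recent client-go releases, which take a context.

package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Ask for graceful termination with an explicit grace period (assumed 30s).
	grace := int64(30)
	if err := client.CoreV1().Pods("pods-4307").Delete(context.TODO(), "my-pod",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}

	// Poll until the pod no longer exists, i.e. the kubelet observed the
	// termination notice and the API server removed the object. A real
	// caller would bound this loop with a timeout.
	for {
		_, err := client.CoreV1().Pods("pods-4307").Get(context.TODO(), "my-pod", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(time.Second)
	}
}
------------------------------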
• [SLOW TEST:12.251 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":17,"skipped":250,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:13:37.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:13:37.911: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c5a3dd8-8d86-48a7-b70a-e7c0e3095b74" in namespace "projected-8762" to be "success or failure" Mar 10 21:13:37.914: INFO: Pod "downwardapi-volume-6c5a3dd8-8d86-48a7-b70a-e7c0e3095b74": Phase="Pending", Reason="", readiness=false. Elapsed: 3.754438ms Mar 10 21:13:39.918: INFO: Pod "downwardapi-volume-6c5a3dd8-8d86-48a7-b70a-e7c0e3095b74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007535172s STEP: Saw pod success Mar 10 21:13:39.918: INFO: Pod "downwardapi-volume-6c5a3dd8-8d86-48a7-b70a-e7c0e3095b74" satisfied condition "success or failure" Mar 10 21:13:39.921: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6c5a3dd8-8d86-48a7-b70a-e7c0e3095b74 container client-container: STEP: delete the pod Mar 10 21:13:39.947: INFO: Waiting for pod downwardapi-volume-6c5a3dd8-8d86-48a7-b70a-e7c0e3095b74 to disappear Mar 10 21:13:39.951: INFO: Pod downwardapi-volume-6c5a3dd8-8d86-48a7-b70a-e7c0e3095b74 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:13:39.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8762" for this suite. 
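------------------------------
The projected downward API spec above mounts a container's own memory request as a file and checks the file's content. A sketch of the pod shape involved, assuming the request is wired through a projected downwardAPI volume; the agnhost image is the one used elsewhere in this run, while the mount path, file name, and 32Mi request are illustrative assumptions.

package demo

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIVolumePod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "client-container",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Resources: v1.ResourceRequirements{
					// The value exposed through the volume below.
					Requests: v1.ResourceList{v1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							DownwardAPI: &v1.DownwardAPIProjection{
								Items: []v1.DownwardAPIVolumeFile{{
									// /etc/podinfo/memory_request will contain the
									// request in bytes ("33554432" for 32Mi).
									Path: "memory_request",
									ResourceFieldRef: &v1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}
------------------------------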
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":250,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:13:39.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-7d9134c3-99ec-4d57-9390-07169747826d Mar 10 21:13:40.045: INFO: Pod name my-hostname-basic-7d9134c3-99ec-4d57-9390-07169747826d: Found 0 pods out of 1 Mar 10 21:13:45.049: INFO: Pod name my-hostname-basic-7d9134c3-99ec-4d57-9390-07169747826d: Found 1 pods out of 1 Mar 10 21:13:45.049: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7d9134c3-99ec-4d57-9390-07169747826d" are running Mar 10 21:13:45.055: INFO: Pod "my-hostname-basic-7d9134c3-99ec-4d57-9390-07169747826d-s78xl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-10 21:13:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-10 21:13:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-10 21:13:41 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-10 21:13:40 +0000 UTC Reason: Message:}]) Mar 10 21:13:45.055: INFO: Trying to dial the pod Mar 10 21:13:50.071: INFO: Controller my-hostname-basic-7d9134c3-99ec-4d57-9390-07169747826d: Got expected result from replica 1 [my-hostname-basic-7d9134c3-99ec-4d57-9390-07169747826d-s78xl]: "my-hostname-basic-7d9134c3-99ec-4d57-9390-07169747826d-s78xl", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:13:50.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3158" for this suite. 
• [SLOW TEST:10.119 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":19,"skipped":261,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:13:50.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 10 21:13:50.200: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6720 /api/v1/namespaces/watch-6720/configmaps/e2e-watch-test-watch-closed 4cc3b844-38a3-4dc0-883a-e6da005beb27 668570 0 2020-03-10 21:13:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 10 21:13:50.200: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6720 /api/v1/namespaces/watch-6720/configmaps/e2e-watch-test-watch-closed 4cc3b844-38a3-4dc0-883a-e6da005beb27 668572 0 2020-03-10 21:13:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 10 21:13:50.215: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6720 /api/v1/namespaces/watch-6720/configmaps/e2e-watch-test-watch-closed 4cc3b844-38a3-4dc0-883a-e6da005beb27 668573 0 2020-03-10 21:13:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 10 21:13:50.215: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6720 /api/v1/namespaces/watch-6720/configmaps/e2e-watch-test-watch-closed 4cc3b844-38a3-4dc0-883a-e6da005beb27 668574 0 2020-03-10 21:13:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 
21:13:50.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6720" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":20,"skipped":278,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:13:50.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:13:50.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9080" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:13:50.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-1266a8ae-6a0d-4e81-b8f2-82e03749ddf2 STEP: Creating a pod to test consume configMaps Mar 10 21:13:50.452: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a65c8f3-e31b-4490-b47a-1108bd1c6395" in namespace "projected-308" to be "success or failure" Mar 10 21:13:50.475: INFO: Pod "pod-projected-configmaps-6a65c8f3-e31b-4490-b47a-1108bd1c6395": Phase="Pending", Reason="", readiness=false. Elapsed: 22.202537ms Mar 10 21:13:52.479: INFO: Pod "pod-projected-configmaps-6a65c8f3-e31b-4490-b47a-1108bd1c6395": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.026256834s STEP: Saw pod success Mar 10 21:13:52.479: INFO: Pod "pod-projected-configmaps-6a65c8f3-e31b-4490-b47a-1108bd1c6395" satisfied condition "success or failure" Mar 10 21:13:52.482: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-6a65c8f3-e31b-4490-b47a-1108bd1c6395 container projected-configmap-volume-test: STEP: delete the pod Mar 10 21:13:52.517: INFO: Waiting for pod pod-projected-configmaps-6a65c8f3-e31b-4490-b47a-1108bd1c6395 to disappear Mar 10 21:13:52.526: INFO: Pod pod-projected-configmaps-6a65c8f3-e31b-4490-b47a-1108bd1c6395 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:13:52.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-308" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":313,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:13:52.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0310 21:14:02.766395 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
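------------------------------
In the garbage-collector spec above, half of the pods owned by simpletest-rc-to-be-deleted are given a second ownerReference pointing at simpletest-rc-to-stay, and the first RC is then deleted with cascading. The dually-owned pods must survive, since a valid owner remains. A sketch of the delete call, assuming foreground propagation (the "owner that's waiting for dependents to be deleted" in the spec name) and recent client-go signatures; names follow the STEP lines above.

package demo

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteWithForeground(client kubernetes.Interface) error {
	// Foreground: the RC is only removed after its dependents are processed.
	// Pods whose ownerReferences also list simpletest-rc-to-stay are not
	// deleted; the collector merely drops the stale ownerReference entry.
	policy := metav1.DeletePropagationForeground
	return client.CoreV1().ReplicationControllers("gc-7244").Delete(
		context.TODO(), "simpletest-rc-to-be-deleted",
		metav1.DeleteOptions{PropagationPolicy: &policy})
}
------------------------------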
Mar 10 21:14:02.766: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:14:02.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7244" for this suite. • [SLOW TEST:10.241 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":23,"skipped":322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:14:02.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0310 21:14:32.924571 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
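------------------------------
The orphan spec above deletes a Deployment with deleteOptions.PropagationPolicy: Orphan and then waits 30 seconds to verify the garbage collector leaves the Deployment's ReplicaSet in place. The equivalent delete call might look like the following sketch; the deployment name is a placeholder and the signatures are those of recent client-go.

package demo

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteWithOrphan(client kubernetes.Interface) error {
	// Orphan: the ReplicaSet keeps running; the collector strips its
	// ownerReference to the deleted Deployment instead of cascading.
	policy := metav1.DeletePropagationOrphan
	return client.AppsV1().Deployments("gc-2083").Delete(
		context.TODO(), "my-deployment", // placeholder name
		metav1.DeleteOptions{PropagationPolicy: &policy})
}
------------------------------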
Mar 10 21:14:32.924: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:14:32.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2083" for this suite. • [SLOW TEST:30.158 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":24,"skipped":362,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:14:32.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:14:33.074: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"fcc622c4-8018-4e0e-94d5-e78132ba51df", Controller:(*bool)(0xc001bf4562), BlockOwnerDeletion:(*bool)(0xc001bf4563)}} Mar 10 21:14:33.093: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"31598d11-e3e8-452c-bdce-29c630c98c3d", Controller:(*bool)(0xc002bc5062), BlockOwnerDeletion:(*bool)(0xc002bc5063)}} Mar 10 21:14:33.103: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5a808d52-4aff-4469-a89c-5d5fbdcbe2dd", Controller:(*bool)(0xc001bf470a), BlockOwnerDeletion:(*bool)(0xc001bf470b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:14:38.195: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4028" for this suite. • [SLOW TEST:5.320 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":25,"skipped":375,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:14:38.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5950 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-5950 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5950 Mar 10 21:14:38.455: INFO: Found 0 stateful pods, waiting for 1 Mar 10 21:14:48.458: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 10 21:14:48.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5950 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 10 21:14:48.646: INFO: stderr: "I0310 21:14:48.563135 107 log.go:172] (0xc0000f49a0) (0xc00079c000) Create stream\nI0310 21:14:48.563174 107 log.go:172] (0xc0000f49a0) (0xc00079c000) Stream added, broadcasting: 1\nI0310 21:14:48.564871 107 log.go:172] (0xc0000f49a0) Reply frame received for 1\nI0310 21:14:48.564897 107 log.go:172] (0xc0000f49a0) (0xc0007ea000) Create stream\nI0310 21:14:48.564905 107 log.go:172] (0xc0000f49a0) (0xc0007ea000) Stream added, broadcasting: 3\nI0310 21:14:48.565508 107 log.go:172] (0xc0000f49a0) Reply frame received for 3\nI0310 21:14:48.565536 107 log.go:172] (0xc0000f49a0) (0xc0007ea0a0) Create stream\nI0310 21:14:48.565542 107 log.go:172] (0xc0000f49a0) (0xc0007ea0a0) Stream added, broadcasting: 5\nI0310 21:14:48.566155 107 log.go:172] (0xc0000f49a0) Reply frame received for 5\nI0310 21:14:48.618676 107 log.go:172] (0xc0000f49a0) Data frame received for 5\nI0310 21:14:48.618697 107 log.go:172] (0xc0007ea0a0) (5) Data frame handling\nI0310 21:14:48.618708 107 log.go:172] (0xc0007ea0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0310 
21:14:48.642619 107 log.go:172] (0xc0000f49a0) Data frame received for 5\nI0310 21:14:48.642649 107 log.go:172] (0xc0007ea0a0) (5) Data frame handling\nI0310 21:14:48.642666 107 log.go:172] (0xc0000f49a0) Data frame received for 3\nI0310 21:14:48.642673 107 log.go:172] (0xc0007ea000) (3) Data frame handling\nI0310 21:14:48.642679 107 log.go:172] (0xc0007ea000) (3) Data frame sent\nI0310 21:14:48.642687 107 log.go:172] (0xc0000f49a0) Data frame received for 3\nI0310 21:14:48.642691 107 log.go:172] (0xc0007ea000) (3) Data frame handling\nI0310 21:14:48.643621 107 log.go:172] (0xc0000f49a0) Data frame received for 1\nI0310 21:14:48.643636 107 log.go:172] (0xc00079c000) (1) Data frame handling\nI0310 21:14:48.643645 107 log.go:172] (0xc00079c000) (1) Data frame sent\nI0310 21:14:48.643653 107 log.go:172] (0xc0000f49a0) (0xc00079c000) Stream removed, broadcasting: 1\nI0310 21:14:48.643662 107 log.go:172] (0xc0000f49a0) Go away received\nI0310 21:14:48.643921 107 log.go:172] (0xc0000f49a0) (0xc00079c000) Stream removed, broadcasting: 1\nI0310 21:14:48.643932 107 log.go:172] (0xc0000f49a0) (0xc0007ea000) Stream removed, broadcasting: 3\nI0310 21:14:48.643937 107 log.go:172] (0xc0000f49a0) (0xc0007ea0a0) Stream removed, broadcasting: 5\n" Mar 10 21:14:48.646: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 10 21:14:48.646: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 10 21:14:48.648: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 10 21:14:58.652: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 10 21:14:58.652: INFO: Waiting for statefulset status.replicas updated to 0 Mar 10 21:14:58.666: INFO: POD NODE PHASE GRACE CONDITIONS Mar 10 21:14:58.666: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC }] Mar 10 21:14:58.666: INFO: Mar 10 21:14:58.666: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 10 21:14:59.695: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990878786s Mar 10 21:15:00.699: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961538249s Mar 10 21:15:01.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.957798156s Mar 10 21:15:02.708: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.953361218s Mar 10 21:15:03.712: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.949195607s Mar 10 21:15:04.719: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.944858601s Mar 10 21:15:05.724: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.937621797s Mar 10 21:15:06.729: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.932931964s Mar 10 21:15:07.733: INFO: Verifying statefulset ss doesn't scale past 3 for another 928.189078ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5950 Mar 10 21:15:08.737: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-5950 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 10 21:15:08.931: INFO: stderr: "I0310 21:15:08.863199 126 log.go:172] (0xc000909130) (0xc00094c5a0) Create stream\nI0310 21:15:08.863235 126 log.go:172] (0xc000909130) (0xc00094c5a0) Stream added, broadcasting: 1\nI0310 21:15:08.866721 126 log.go:172] (0xc000909130) Reply frame received for 1\nI0310 21:15:08.866749 126 log.go:172] (0xc000909130) (0xc000715ae0) Create stream\nI0310 21:15:08.866765 126 log.go:172] (0xc000909130) (0xc000715ae0) Stream added, broadcasting: 3\nI0310 21:15:08.867366 126 log.go:172] (0xc000909130) Reply frame received for 3\nI0310 21:15:08.867387 126 log.go:172] (0xc000909130) (0xc00094c000) Create stream\nI0310 21:15:08.867394 126 log.go:172] (0xc000909130) (0xc00094c000) Stream added, broadcasting: 5\nI0310 21:15:08.867948 126 log.go:172] (0xc000909130) Reply frame received for 5\nI0310 21:15:08.923189 126 log.go:172] (0xc000909130) Data frame received for 3\nI0310 21:15:08.923265 126 log.go:172] (0xc000715ae0) (3) Data frame handling\nI0310 21:15:08.923290 126 log.go:172] (0xc000715ae0) (3) Data frame sent\nI0310 21:15:08.923388 126 log.go:172] (0xc000909130) Data frame received for 3\nI0310 21:15:08.923419 126 log.go:172] (0xc000715ae0) (3) Data frame handling\nI0310 21:15:08.926332 126 log.go:172] (0xc000909130) Data frame received for 5\nI0310 21:15:08.926374 126 log.go:172] (0xc00094c000) (5) Data frame handling\nI0310 21:15:08.926408 126 log.go:172] (0xc00094c000) (5) Data frame sent\nI0310 21:15:08.926446 126 log.go:172] (0xc000909130) Data frame received for 5\nI0310 21:15:08.926479 126 log.go:172] (0xc00094c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0310 21:15:08.928258 126 log.go:172] (0xc000909130) Data frame received for 1\nI0310 21:15:08.928307 126 log.go:172] (0xc00094c5a0) (1) Data frame handling\nI0310 21:15:08.928338 126 log.go:172] (0xc00094c5a0) (1) Data frame sent\nI0310 21:15:08.928374 126 log.go:172] (0xc000909130) (0xc00094c5a0) Stream removed, broadcasting: 1\nI0310 21:15:08.928416 126 log.go:172] (0xc000909130) Go away received\nI0310 21:15:08.928638 126 log.go:172] (0xc000909130) (0xc00094c5a0) Stream removed, broadcasting: 1\nI0310 21:15:08.928654 126 log.go:172] (0xc000909130) (0xc000715ae0) Stream removed, broadcasting: 3\nI0310 21:15:08.928660 126 log.go:172] (0xc000909130) (0xc00094c000) Stream removed, broadcasting: 5\n" Mar 10 21:15:08.931: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 10 21:15:08.931: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 10 21:15:08.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5950 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 10 21:15:09.077: INFO: stderr: "I0310 21:15:09.020394 145 log.go:172] (0xc0009cf130) (0xc0006d5f40) Create stream\nI0310 21:15:09.020424 145 log.go:172] (0xc0009cf130) (0xc0006d5f40) Stream added, broadcasting: 1\nI0310 21:15:09.023479 145 log.go:172] (0xc0009cf130) Reply frame received for 1\nI0310 21:15:09.023519 145 log.go:172] (0xc0009cf130) (0xc000a5e0a0) Create stream\nI0310 21:15:09.023529 145 log.go:172] (0xc0009cf130) (0xc000a5e0a0) Stream added, broadcasting: 3\nI0310 21:15:09.024569 145 log.go:172] (0xc0009cf130) Reply frame received for 3\nI0310 
21:15:09.024602 145 log.go:172] (0xc0009cf130) (0xc0009b4000) Create stream\nI0310 21:15:09.024610 145 log.go:172] (0xc0009cf130) (0xc0009b4000) Stream added, broadcasting: 5\nI0310 21:15:09.025433 145 log.go:172] (0xc0009cf130) Reply frame received for 5\nI0310 21:15:09.074163 145 log.go:172] (0xc0009cf130) Data frame received for 3\nI0310 21:15:09.074185 145 log.go:172] (0xc000a5e0a0) (3) Data frame handling\nI0310 21:15:09.074197 145 log.go:172] (0xc000a5e0a0) (3) Data frame sent\nI0310 21:15:09.074382 145 log.go:172] (0xc0009cf130) Data frame received for 3\nI0310 21:15:09.074391 145 log.go:172] (0xc000a5e0a0) (3) Data frame handling\nI0310 21:15:09.074406 145 log.go:172] (0xc0009cf130) Data frame received for 5\nI0310 21:15:09.074419 145 log.go:172] (0xc0009b4000) (5) Data frame handling\nI0310 21:15:09.074430 145 log.go:172] (0xc0009b4000) (5) Data frame sent\nI0310 21:15:09.074437 145 log.go:172] (0xc0009cf130) Data frame received for 5\nI0310 21:15:09.074444 145 log.go:172] (0xc0009b4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0310 21:15:09.075126 145 log.go:172] (0xc0009cf130) Data frame received for 1\nI0310 21:15:09.075146 145 log.go:172] (0xc0006d5f40) (1) Data frame handling\nI0310 21:15:09.075157 145 log.go:172] (0xc0006d5f40) (1) Data frame sent\nI0310 21:15:09.075169 145 log.go:172] (0xc0009cf130) (0xc0006d5f40) Stream removed, broadcasting: 1\nI0310 21:15:09.075227 145 log.go:172] (0xc0009cf130) Go away received\nI0310 21:15:09.075381 145 log.go:172] (0xc0009cf130) (0xc0006d5f40) Stream removed, broadcasting: 1\nI0310 21:15:09.075390 145 log.go:172] (0xc0009cf130) (0xc000a5e0a0) Stream removed, broadcasting: 3\nI0310 21:15:09.075395 145 log.go:172] (0xc0009cf130) (0xc0009b4000) Stream removed, broadcasting: 5\n" Mar 10 21:15:09.077: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 10 21:15:09.077: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 10 21:15:09.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5950 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 10 21:15:09.219: INFO: stderr: "I0310 21:15:09.156361 163 log.go:172] (0xc0005de840) (0xc0005d4000) Create stream\nI0310 21:15:09.156394 163 log.go:172] (0xc0005de840) (0xc0005d4000) Stream added, broadcasting: 1\nI0310 21:15:09.157886 163 log.go:172] (0xc0005de840) Reply frame received for 1\nI0310 21:15:09.157908 163 log.go:172] (0xc0005de840) (0xc0006dbae0) Create stream\nI0310 21:15:09.157913 163 log.go:172] (0xc0005de840) (0xc0006dbae0) Stream added, broadcasting: 3\nI0310 21:15:09.158411 163 log.go:172] (0xc0005de840) Reply frame received for 3\nI0310 21:15:09.158432 163 log.go:172] (0xc0005de840) (0xc000206000) Create stream\nI0310 21:15:09.158439 163 log.go:172] (0xc0005de840) (0xc000206000) Stream added, broadcasting: 5\nI0310 21:15:09.158868 163 log.go:172] (0xc0005de840) Reply frame received for 5\nI0310 21:15:09.215086 163 log.go:172] (0xc0005de840) Data frame received for 5\nI0310 21:15:09.215105 163 log.go:172] (0xc000206000) (5) Data frame handling\nI0310 21:15:09.215112 163 log.go:172] (0xc000206000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0310 21:15:09.215154 163 log.go:172] 
(0xc0005de840) Data frame received for 3\nI0310 21:15:09.215160 163 log.go:172] (0xc0006dbae0) (3) Data frame handling\nI0310 21:15:09.215172 163 log.go:172] (0xc0006dbae0) (3) Data frame sent\nI0310 21:15:09.215183 163 log.go:172] (0xc0005de840) Data frame received for 3\nI0310 21:15:09.215190 163 log.go:172] (0xc0006dbae0) (3) Data frame handling\nI0310 21:15:09.215206 163 log.go:172] (0xc0005de840) Data frame received for 5\nI0310 21:15:09.215213 163 log.go:172] (0xc000206000) (5) Data frame handling\nI0310 21:15:09.216031 163 log.go:172] (0xc0005de840) Data frame received for 1\nI0310 21:15:09.216047 163 log.go:172] (0xc0005d4000) (1) Data frame handling\nI0310 21:15:09.216060 163 log.go:172] (0xc0005d4000) (1) Data frame sent\nI0310 21:15:09.216074 163 log.go:172] (0xc0005de840) (0xc0005d4000) Stream removed, broadcasting: 1\nI0310 21:15:09.216088 163 log.go:172] (0xc0005de840) Go away received\nI0310 21:15:09.216424 163 log.go:172] (0xc0005de840) (0xc0005d4000) Stream removed, broadcasting: 1\nI0310 21:15:09.216442 163 log.go:172] (0xc0005de840) (0xc0006dbae0) Stream removed, broadcasting: 3\nI0310 21:15:09.216451 163 log.go:172] (0xc0005de840) (0xc000206000) Stream removed, broadcasting: 5\n" Mar 10 21:15:09.219: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 10 21:15:09.219: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 10 21:15:09.221: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 10 21:15:19.224: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:15:19.224: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:15:19.224: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 10 21:15:19.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5950 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 10 21:15:19.366: INFO: stderr: "I0310 21:15:19.313380 183 log.go:172] (0xc0009940b0) (0xc000687b80) Create stream\nI0310 21:15:19.313417 183 log.go:172] (0xc0009940b0) (0xc000687b80) Stream added, broadcasting: 1\nI0310 21:15:19.316459 183 log.go:172] (0xc0009940b0) Reply frame received for 1\nI0310 21:15:19.316494 183 log.go:172] (0xc0009940b0) (0xc0005da5a0) Create stream\nI0310 21:15:19.316505 183 log.go:172] (0xc0009940b0) (0xc0005da5a0) Stream added, broadcasting: 3\nI0310 21:15:19.317079 183 log.go:172] (0xc0009940b0) Reply frame received for 3\nI0310 21:15:19.317098 183 log.go:172] (0xc0009940b0) (0xc00051d360) Create stream\nI0310 21:15:19.317106 183 log.go:172] (0xc0009940b0) (0xc00051d360) Stream added, broadcasting: 5\nI0310 21:15:19.317656 183 log.go:172] (0xc0009940b0) Reply frame received for 5\nI0310 21:15:19.362972 183 log.go:172] (0xc0009940b0) Data frame received for 5\nI0310 21:15:19.362989 183 log.go:172] (0xc00051d360) (5) Data frame handling\nI0310 21:15:19.362995 183 log.go:172] (0xc00051d360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0310 21:15:19.363004 183 log.go:172] (0xc0009940b0) Data frame received for 3\nI0310 21:15:19.363008 183 log.go:172] (0xc0005da5a0) (3) Data frame handling\nI0310 21:15:19.363013 183 log.go:172] (0xc0005da5a0) (3) Data frame sent\nI0310 21:15:19.363019 183 
log.go:172] (0xc0009940b0) Data frame received for 3\nI0310 21:15:19.363024 183 log.go:172] (0xc0005da5a0) (3) Data frame handling\nI0310 21:15:19.363075 183 log.go:172] (0xc0009940b0) Data frame received for 5\nI0310 21:15:19.363106 183 log.go:172] (0xc00051d360) (5) Data frame handling\nI0310 21:15:19.364242 183 log.go:172] (0xc0009940b0) Data frame received for 1\nI0310 21:15:19.364251 183 log.go:172] (0xc000687b80) (1) Data frame handling\nI0310 21:15:19.364259 183 log.go:172] (0xc000687b80) (1) Data frame sent\nI0310 21:15:19.364348 183 log.go:172] (0xc0009940b0) (0xc000687b80) Stream removed, broadcasting: 1\nI0310 21:15:19.364375 183 log.go:172] (0xc0009940b0) Go away received\nI0310 21:15:19.364550 183 log.go:172] (0xc0009940b0) (0xc000687b80) Stream removed, broadcasting: 1\nI0310 21:15:19.364558 183 log.go:172] (0xc0009940b0) (0xc0005da5a0) Stream removed, broadcasting: 3\nI0310 21:15:19.364563 183 log.go:172] (0xc0009940b0) (0xc00051d360) Stream removed, broadcasting: 5\n" Mar 10 21:15:19.367: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 10 21:15:19.367: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 10 21:15:19.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5950 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 10 21:15:19.516: INFO: stderr: "I0310 21:15:19.447050 204 log.go:172] (0xc000a900b0) (0xc0009c4000) Create stream\nI0310 21:15:19.447082 204 log.go:172] (0xc000a900b0) (0xc0009c4000) Stream added, broadcasting: 1\nI0310 21:15:19.448972 204 log.go:172] (0xc000a900b0) Reply frame received for 1\nI0310 21:15:19.449004 204 log.go:172] (0xc000a900b0) (0xc0009f8000) Create stream\nI0310 21:15:19.449013 204 log.go:172] (0xc000a900b0) (0xc0009f8000) Stream added, broadcasting: 3\nI0310 21:15:19.449673 204 log.go:172] (0xc000a900b0) Reply frame received for 3\nI0310 21:15:19.449700 204 log.go:172] (0xc000a900b0) (0xc0009c40a0) Create stream\nI0310 21:15:19.449713 204 log.go:172] (0xc000a900b0) (0xc0009c40a0) Stream added, broadcasting: 5\nI0310 21:15:19.450351 204 log.go:172] (0xc000a900b0) Reply frame received for 5\nI0310 21:15:19.492884 204 log.go:172] (0xc000a900b0) Data frame received for 5\nI0310 21:15:19.492900 204 log.go:172] (0xc0009c40a0) (5) Data frame handling\nI0310 21:15:19.492906 204 log.go:172] (0xc0009c40a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0310 21:15:19.512092 204 log.go:172] (0xc000a900b0) Data frame received for 5\nI0310 21:15:19.512109 204 log.go:172] (0xc0009c40a0) (5) Data frame handling\nI0310 21:15:19.512146 204 log.go:172] (0xc000a900b0) Data frame received for 3\nI0310 21:15:19.512163 204 log.go:172] (0xc0009f8000) (3) Data frame handling\nI0310 21:15:19.512173 204 log.go:172] (0xc0009f8000) (3) Data frame sent\nI0310 21:15:19.512180 204 log.go:172] (0xc000a900b0) Data frame received for 3\nI0310 21:15:19.512184 204 log.go:172] (0xc0009f8000) (3) Data frame handling\nI0310 21:15:19.513499 204 log.go:172] (0xc000a900b0) Data frame received for 1\nI0310 21:15:19.513510 204 log.go:172] (0xc0009c4000) (1) Data frame handling\nI0310 21:15:19.513516 204 log.go:172] (0xc0009c4000) (1) Data frame sent\nI0310 21:15:19.513522 204 log.go:172] (0xc000a900b0) (0xc0009c4000) Stream removed, broadcasting: 1\nI0310 21:15:19.513580 204 log.go:172] (0xc000a900b0) Go away received\nI0310 21:15:19.513717 204 
log.go:172] (0xc000a900b0) (0xc0009c4000) Stream removed, broadcasting: 1\nI0310 21:15:19.513726 204 log.go:172] (0xc000a900b0) (0xc0009f8000) Stream removed, broadcasting: 3\nI0310 21:15:19.513731 204 log.go:172] (0xc000a900b0) (0xc0009c40a0) Stream removed, broadcasting: 5\n" Mar 10 21:15:19.516: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 10 21:15:19.516: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 10 21:15:19.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5950 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 10 21:15:19.738: INFO: stderr: "I0310 21:15:19.624179 225 log.go:172] (0xc000b85340) (0xc000c1e5a0) Create stream\nI0310 21:15:19.624216 225 log.go:172] (0xc000b85340) (0xc000c1e5a0) Stream added, broadcasting: 1\nI0310 21:15:19.629720 225 log.go:172] (0xc000b85340) Reply frame received for 1\nI0310 21:15:19.629897 225 log.go:172] (0xc000b85340) (0xc0009a8140) Create stream\nI0310 21:15:19.629922 225 log.go:172] (0xc000b85340) (0xc0009a8140) Stream added, broadcasting: 3\nI0310 21:15:19.631133 225 log.go:172] (0xc000b85340) Reply frame received for 3\nI0310 21:15:19.631170 225 log.go:172] (0xc000b85340) (0xc000a06140) Create stream\nI0310 21:15:19.631184 225 log.go:172] (0xc000b85340) (0xc000a06140) Stream added, broadcasting: 5\nI0310 21:15:19.632011 225 log.go:172] (0xc000b85340) Reply frame received for 5\nI0310 21:15:19.687971 225 log.go:172] (0xc000b85340) Data frame received for 5\nI0310 21:15:19.687991 225 log.go:172] (0xc000a06140) (5) Data frame handling\nI0310 21:15:19.688006 225 log.go:172] (0xc000a06140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0310 21:15:19.733234 225 log.go:172] (0xc000b85340) Data frame received for 3\nI0310 21:15:19.733256 225 log.go:172] (0xc0009a8140) (3) Data frame handling\nI0310 21:15:19.733280 225 log.go:172] (0xc0009a8140) (3) Data frame sent\nI0310 21:15:19.733289 225 log.go:172] (0xc000b85340) Data frame received for 3\nI0310 21:15:19.733295 225 log.go:172] (0xc0009a8140) (3) Data frame handling\nI0310 21:15:19.733443 225 log.go:172] (0xc000b85340) Data frame received for 5\nI0310 21:15:19.733463 225 log.go:172] (0xc000a06140) (5) Data frame handling\nI0310 21:15:19.734768 225 log.go:172] (0xc000b85340) Data frame received for 1\nI0310 21:15:19.734785 225 log.go:172] (0xc000c1e5a0) (1) Data frame handling\nI0310 21:15:19.734795 225 log.go:172] (0xc000c1e5a0) (1) Data frame sent\nI0310 21:15:19.734805 225 log.go:172] (0xc000b85340) (0xc000c1e5a0) Stream removed, broadcasting: 1\nI0310 21:15:19.734816 225 log.go:172] (0xc000b85340) Go away received\nI0310 21:15:19.735217 225 log.go:172] (0xc000b85340) (0xc000c1e5a0) Stream removed, broadcasting: 1\nI0310 21:15:19.735238 225 log.go:172] (0xc000b85340) (0xc0009a8140) Stream removed, broadcasting: 3\nI0310 21:15:19.735252 225 log.go:172] (0xc000b85340) (0xc000a06140) Stream removed, broadcasting: 5\n" Mar 10 21:15:19.738: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 10 21:15:19.738: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 10 21:15:19.738: INFO: Waiting for statefulset status.replicas updated to 0 Mar 10 21:15:19.759: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 
Mar 10 21:15:29.765: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 10 21:15:29.765: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 10 21:15:29.765: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 10 21:15:29.788: INFO: POD NODE PHASE GRACE CONDITIONS Mar 10 21:15:29.788: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC }] Mar 10 21:15:29.788: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:29.788: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:29.788: INFO: Mar 10 21:15:29.788: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 10 21:15:30.793: INFO: POD NODE PHASE GRACE CONDITIONS Mar 10 21:15:30.793: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC }] Mar 10 21:15:30.793: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:30.793: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 
21:14:58 +0000 UTC }] Mar 10 21:15:30.793: INFO: Mar 10 21:15:30.793: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 10 21:15:31.797: INFO: POD NODE PHASE GRACE CONDITIONS Mar 10 21:15:31.797: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC }] Mar 10 21:15:31.798: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:31.798: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:31.798: INFO: Mar 10 21:15:31.798: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 10 21:15:32.802: INFO: POD NODE PHASE GRACE CONDITIONS Mar 10 21:15:32.802: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC }] Mar 10 21:15:32.802: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:32.802: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:32.802: INFO: Mar 10 21:15:32.802: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 10 21:15:33.808: INFO: POD NODE PHASE GRACE CONDITIONS Mar 10 21:15:33.808: INFO: 
ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC }] Mar 10 21:15:33.808: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:33.808: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:33.808: INFO: Mar 10 21:15:33.808: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 10 21:15:34.812: INFO: POD NODE PHASE GRACE CONDITIONS Mar 10 21:15:34.812: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC }] Mar 10 21:15:34.812: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:34.812: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:34.812: INFO: Mar 10 21:15:34.812: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 10 21:15:35.818: INFO: POD NODE PHASE GRACE CONDITIONS Mar 10 21:15:35.818: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:38 +0000 UTC }] Mar 10 21:15:35.818: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:35.818: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:15:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-10 21:14:58 +0000 UTC }] Mar 10 21:15:35.818: INFO: Mar 10 21:15:35.818: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 10 21:15:36.822: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.952319262s Mar 10 21:15:37.825: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.948625226s Mar 10 21:15:38.829: INFO: Verifying statefulset ss doesn't scale past 0 for another 945.176155ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-5950 Mar 10 21:15:39.832: INFO: Scaling statefulset ss to 0 Mar 10 21:15:39.839: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 10 21:15:39.841: INFO: Deleting all statefulset in ns statefulset-5950 Mar 10 21:15:39.843: INFO: Scaling statefulset ss to 0 Mar 10 21:15:39.850: INFO: Waiting for statefulset status.replicas updated to 0 Mar 10 21:15:39.852: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:15:39.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5950" for this suite.
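The mv commands captured above are the mechanism of this test: the webserver pods' readiness evidently depends on index.html being served (each mv to /tmp flips Ready to false in the log), so the file can be hidden to fail the readiness check without killing the container, and the suite then verifies that scale-down is not halted by the unready pods. A minimal manual reproduction in the same vein (names match this run, but this is an illustrative sketch, not the suite's exact invocation):

    # Flip ss-0 to unready: hide the file its readiness check serves
    kubectl --namespace=statefulset-5950 exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
    # The Ready condition goes False while the container keeps running
    kubectl --namespace=statefulset-5950 get pod ss-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Scale-down proceeds anyway, which is exactly what the test asserts
    kubectl --namespace=statefulset-5950 scale statefulset ss --replicas=0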
• [SLOW TEST:61.646 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":26,"skipped":377,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:15:39.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 10 21:15:42.523: INFO: Successfully updated pod "annotationupdatee787367b-1078-42fc-929d-4b36d3eb46ab" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:15:46.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9241" for this suite. 
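The projected downwardAPI test above logs only the pod update, so here is a sketch of the underlying pattern it exercises: pod annotations projected into a file, which the kubelet rewrites after the annotation changes. All names here are invented for illustration, and the refresh is eventually consistent (it lands on a kubelet sync pass, not instantly):

    # annotation-demo.yaml (illustrative)
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotation-demo
      annotations:
        build: "one"
    spec:
      containers:
      - name: client
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: annotations
                fieldRef:
                  fieldPath: metadata.annotations
    # then: change the annotation and the file under /etc/podinfo is updated in place
    kubectl apply -f annotation-demo.yaml
    kubectl annotate pod annotation-demo build=two --overwrite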
• [SLOW TEST:6.663 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:15:46.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:15:47.347: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 10 21:15:49.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471747, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471747, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471747, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719471747, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:15:52.408: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:15:52.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1092-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:15:53.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-392" for this suite. STEP: Destroying namespace "webhook-392-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.112 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":28,"skipped":434,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:15:53.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:16:10.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6851" for this suite. • [SLOW TEST:17.137 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":29,"skipped":437,"failed":0} SS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:16:10.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 10 21:16:12.976: INFO: Pod pod-hostip-5b47090a-0bf3-4128-bbd8-6383d1903a23 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:16:12.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7705" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:16:12.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 10 21:16:15.610: INFO: Successfully updated pod "adopt-release-65t8h" STEP: Checking that the Job readopts the Pod Mar 10 21:16:15.610: INFO: Waiting up to 15m0s for pod "adopt-release-65t8h" in namespace "job-641" to be "adopted" Mar 10 21:16:15.648: INFO: Pod "adopt-release-65t8h": Phase="Running", Reason="", readiness=true. Elapsed: 37.674402ms Mar 10 21:16:17.652: INFO: Pod "adopt-release-65t8h": Phase="Running", Reason="", readiness=true. Elapsed: 2.04155584s Mar 10 21:16:17.652: INFO: Pod "adopt-release-65t8h" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 10 21:16:18.172: INFO: Successfully updated pod "adopt-release-65t8h" STEP: Checking that the Job releases the Pod Mar 10 21:16:18.172: INFO: Waiting up to 15m0s for pod "adopt-release-65t8h" in namespace "job-641" to be "released" Mar 10 21:16:18.174: INFO: Pod "adopt-release-65t8h": Phase="Running", Reason="", readiness=true. Elapsed: 2.552519ms Mar 10 21:16:20.179: INFO: Pod "adopt-release-65t8h": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006955039s Mar 10 21:16:20.179: INFO: Pod "adopt-release-65t8h" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:16:20.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-641" for this suite. • [SLOW TEST:7.195 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":31,"skipped":546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:16:20.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-hrw4 STEP: Creating a pod to test atomic-volume-subpath Mar 10 21:16:20.284: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-hrw4" in namespace "subpath-7893" to be "success or failure" Mar 10 21:16:20.311: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.887132ms Mar 10 21:16:22.315: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030454324s Mar 10 21:16:24.318: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Running", Reason="", readiness=true. Elapsed: 4.034079816s Mar 10 21:16:26.322: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Running", Reason="", readiness=true. Elapsed: 6.03747783s Mar 10 21:16:28.325: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Running", Reason="", readiness=true. Elapsed: 8.040749134s Mar 10 21:16:30.328: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Running", Reason="", readiness=true. Elapsed: 10.044076913s Mar 10 21:16:32.333: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Running", Reason="", readiness=true. Elapsed: 12.048165319s Mar 10 21:16:34.336: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Running", Reason="", readiness=true. Elapsed: 14.051938125s Mar 10 21:16:36.340: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Running", Reason="", readiness=true. Elapsed: 16.056012132s Mar 10 21:16:38.344: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Running", Reason="", readiness=true. Elapsed: 18.059360529s Mar 10 21:16:40.348: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.063383141s Mar 10 21:16:42.351: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Running", Reason="", readiness=true. Elapsed: 22.066264792s Mar 10 21:16:44.355: INFO: Pod "pod-subpath-test-projected-hrw4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.070680095s STEP: Saw pod success Mar 10 21:16:44.355: INFO: Pod "pod-subpath-test-projected-hrw4" satisfied condition "success or failure" Mar 10 21:16:44.359: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-hrw4 container test-container-subpath-projected-hrw4: STEP: delete the pod Mar 10 21:16:44.411: INFO: Waiting for pod pod-subpath-test-projected-hrw4 to disappear Mar 10 21:16:44.424: INFO: Pod pod-subpath-test-projected-hrw4 no longer exists STEP: Deleting pod pod-subpath-test-projected-hrw4 Mar 10 21:16:44.424: INFO: Deleting pod "pod-subpath-test-projected-hrw4" in namespace "subpath-7893" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:16:44.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7893" for this suite. • [SLOW TEST:24.246 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":32,"skipped":578,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:16:44.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:16:48.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3007" for this suite. 
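The kubelet logging test just above asserts only that whatever a container writes to stdout is retrievable through the logs API. A stand-alone equivalent with a made-up pod name:

    # Run a one-shot busybox pod that writes a line to stdout
    kubectl run logdemo --image=busybox --restart=Never --command -- sh -c 'echo Hello from busybox'
    # Once the pod has run to completion, the output is available via the API
    kubectl logs logdemo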
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":579,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:16:48.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 10 21:16:48.619: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 10 21:16:55.682: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:16:55.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9784" for this suite. • [SLOW TEST:7.138 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":592,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:16:55.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:16:55.742: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 10 21:16:55.750: INFO: Number of nodes with available pods: 0 Mar 10 21:16:55.750: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 10 21:16:55.848: INFO: Number of nodes with available pods: 0 Mar 10 21:16:55.848: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:16:56.852: INFO: Number of nodes with available pods: 0 Mar 10 21:16:56.852: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:16:57.852: INFO: Number of nodes with available pods: 1 Mar 10 21:16:57.852: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 10 21:16:57.883: INFO: Number of nodes with available pods: 1 Mar 10 21:16:57.883: INFO: Number of running nodes: 0, number of available pods: 1 Mar 10 21:16:58.886: INFO: Number of nodes with available pods: 0 Mar 10 21:16:58.886: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 10 21:16:58.922: INFO: Number of nodes with available pods: 0 Mar 10 21:16:58.922: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:16:59.925: INFO: Number of nodes with available pods: 0 Mar 10 21:16:59.925: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:17:00.933: INFO: Number of nodes with available pods: 0 Mar 10 21:17:00.933: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:17:01.925: INFO: Number of nodes with available pods: 0 Mar 10 21:17:01.925: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:17:02.928: INFO: Number of nodes with available pods: 0 Mar 10 21:17:02.928: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:17:03.925: INFO: Number of nodes with available pods: 0 Mar 10 21:17:03.925: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:17:04.926: INFO: Number of nodes with available pods: 0 Mar 10 21:17:04.926: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:17:05.926: INFO: Number of nodes with available pods: 0 Mar 10 21:17:05.926: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:17:06.925: INFO: Number of nodes with available pods: 0 Mar 10 21:17:06.925: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:17:07.926: INFO: Number of nodes with available pods: 1 Mar 10 21:17:07.926: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4013, will wait for the garbage collector to delete the pods Mar 10 21:17:07.989: INFO: Deleting DaemonSet.extensions daemon-set took: 5.187177ms Mar 10 21:17:08.289: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.228644ms Mar 10 21:17:16.121: INFO: Number of nodes with available pods: 0 Mar 10 21:17:16.121: INFO: Number of running nodes: 0, number of available pods: 0 Mar 10 21:17:16.124: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4013/daemonsets","resourceVersion":"670062"},"items":null} Mar 10 21:17:16.127: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4013/pods","resourceVersion":"670062"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 
21:17:16.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4013" for this suite. • [SLOW TEST:20.476 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":35,"skipped":593,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:17:16.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 10 21:17:16.266: INFO: Waiting up to 5m0s for pod "var-expansion-dd06d8ee-af61-49fa-bb11-c0d908bf91b6" in namespace "var-expansion-4064" to be "success or failure" Mar 10 21:17:16.270: INFO: Pod "var-expansion-dd06d8ee-af61-49fa-bb11-c0d908bf91b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122163ms Mar 10 21:17:18.275: INFO: Pod "var-expansion-dd06d8ee-af61-49fa-bb11-c0d908bf91b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009530537s STEP: Saw pod success Mar 10 21:17:18.275: INFO: Pod "var-expansion-dd06d8ee-af61-49fa-bb11-c0d908bf91b6" satisfied condition "success or failure" Mar 10 21:17:18.279: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-dd06d8ee-af61-49fa-bb11-c0d908bf91b6 container dapi-container: STEP: delete the pod Mar 10 21:17:18.314: INFO: Waiting for pod var-expansion-dd06d8ee-af61-49fa-bb11-c0d908bf91b6 to disappear Mar 10 21:17:18.330: INFO: Pod var-expansion-dd06d8ee-af61-49fa-bb11-c0d908bf91b6 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:17:18.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4064" for this suite. 
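The variable-expansion test exercises $(VAR) references inside env: the kubelet substitutes previously declared variables before the container starts, so composed values arrive fully expanded. An illustrative manifest (names and values differ from the suite's):

    # env-compose-demo.yaml (illustrative)
    apiVersion: v1
    kind: Pod
    metadata:
      name: env-compose-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo $FOOBAR"]
        env:
        - name: FOO
          value: foo-value
        - name: FOOBAR
          value: "$(FOO);;$(FOO)"
    # then:
    kubectl apply -f env-compose-demo.yaml
    kubectl logs env-compose-demo   # prints foo-value;;foo-value once the pod has run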
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":611,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:17:18.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:17:18.434: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 30.200262ms) Mar 10 21:17:18.444: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 9.8496ms) Mar 10 21:17:18.447: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.547039ms) Mar 10 21:17:18.450: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.777258ms) Mar 10 21:17:18.452: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.379256ms) Mar 10 21:17:18.455: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.664209ms) Mar 10 21:17:18.458: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.931176ms) Mar 10 21:17:18.461: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.579822ms) Mar 10 21:17:18.464: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.808744ms) Mar 10 21:17:18.466: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.367839ms) Mar 10 21:17:18.486: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 20.170875ms) Mar 10 21:17:18.488: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.331184ms) Mar 10 21:17:18.491: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.901244ms) Mar 10 21:17:18.495: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.309799ms) Mar 10 21:17:18.497: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.280954ms) Mar 10 21:17:18.499: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.285581ms) Mar 10 21:17:18.502: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.291933ms) Mar 10 21:17:18.504: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.143429ms) Mar 10 21:17:18.506: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.406225ms) Mar 10 21:17:18.509: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.225547ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:17:18.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6235" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":37,"skipped":620,"failed":0} S ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:17:18.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-d0eeef18-291c-4ba3-8281-60d585b95242 STEP: Creating configMap with name cm-test-opt-upd-e2e21d67-cca9-44fa-9570-a0d1ce878233 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d0eeef18-291c-4ba3-8281-60d585b95242 STEP: Updating configmap cm-test-opt-upd-e2e21d67-cca9-44fa-9570-a0d1ce878233 STEP: Creating configMap with name cm-test-opt-create-88d2e9f2-ac9d-40df-9be1-f52b1cfb168a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:17:24.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6549" for this suite. 
• [SLOW TEST:6.210 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":621,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:17:24.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-6fe743c3-b1ca-4dd8-9cbc-2fbec6262493 in namespace container-probe-1969 Mar 10 21:17:26.788: INFO: Started pod busybox-6fe743c3-b1ca-4dd8-9cbc-2fbec6262493 in namespace container-probe-1969 STEP: checking the pod's current state and verifying that restartCount is present Mar 10 21:17:26.791: INFO: Initial restart count of pod busybox-6fe743c3-b1ca-4dd8-9cbc-2fbec6262493 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:21:27.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1969" for this suite. 
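The probe test above runs for the full four minutes precisely because success here is the absence of an event: restartCount must still be 0 after the observation window. A sketch of the pattern with illustrative names; since /tmp/health exists for the pod's whole life, the exec liveness probe never fails:

    # liveness-ok-demo.yaml (illustrative)
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-ok-demo
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "touch /tmp/health; sleep 3600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    # then: the restart count should stay at 0
    kubectl apply -f liveness-ok-demo.yaml
    kubectl get pod liveness-ok-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'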
• [SLOW TEST:242.947 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":631,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:21:27.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-9521ad21-3e97-4bf0-9851-ec839b399956 STEP: Creating a pod to test consume secrets Mar 10 21:21:27.749: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aa37ee06-76ff-4bce-b849-a5fd63f7ed9f" in namespace "projected-5563" to be "success or failure" Mar 10 21:21:27.754: INFO: Pod "pod-projected-secrets-aa37ee06-76ff-4bce-b849-a5fd63f7ed9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390084ms Mar 10 21:21:29.757: INFO: Pod "pod-projected-secrets-aa37ee06-76ff-4bce-b849-a5fd63f7ed9f": Phase="Running", Reason="", readiness=true. Elapsed: 2.007588476s Mar 10 21:21:31.761: INFO: Pod "pod-projected-secrets-aa37ee06-76ff-4bce-b849-a5fd63f7ed9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011847655s STEP: Saw pod success Mar 10 21:21:31.761: INFO: Pod "pod-projected-secrets-aa37ee06-76ff-4bce-b849-a5fd63f7ed9f" satisfied condition "success or failure" Mar 10 21:21:31.764: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-aa37ee06-76ff-4bce-b849-a5fd63f7ed9f container projected-secret-volume-test: STEP: delete the pod Mar 10 21:21:31.804: INFO: Waiting for pod pod-projected-secrets-aa37ee06-76ff-4bce-b849-a5fd63f7ed9f to disappear Mar 10 21:21:31.808: INFO: Pod pod-projected-secrets-aa37ee06-76ff-4bce-b849-a5fd63f7ed9f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:21:31.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5563" for this suite. 
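"With mappings" in the projected-secret test means the secret key is renamed on its way into the volume via an items list. An illustrative manifest (names and paths are not the suite's exact ones):

    # projected-secret-demo.yaml (illustrative)
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
      volumes:
      - name: projected-secret-volume
        projected:
          sources:
          - secret:
              name: demo-secret
              items:
              - key: data-1
                path: new-path-data-1
    # then: create the secret first, and the pod reads the remapped key
    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f projected-secret-demo.yaml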
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":635,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:21:31.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:21:31.869: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 10 21:21:34.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5619 create -f -' Mar 10 21:21:36.619: INFO: stderr: "" Mar 10 21:21:36.619: INFO: stdout: "e2e-test-crd-publish-openapi-6907-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 10 21:21:36.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5619 delete e2e-test-crd-publish-openapi-6907-crds test-cr' Mar 10 21:21:36.733: INFO: stderr: "" Mar 10 21:21:36.733: INFO: stdout: "e2e-test-crd-publish-openapi-6907-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 10 21:21:36.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5619 apply -f -' Mar 10 21:21:37.039: INFO: stderr: "" Mar 10 21:21:37.039: INFO: stdout: "e2e-test-crd-publish-openapi-6907-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 10 21:21:37.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5619 delete e2e-test-crd-publish-openapi-6907-crds test-cr' Mar 10 21:21:37.157: INFO: stderr: "" Mar 10 21:21:37.157: INFO: stdout: "e2e-test-crd-publish-openapi-6907-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 10 21:21:37.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6907-crds' Mar 10 21:21:37.434: INFO: stderr: "" Mar 10 21:21:37.434: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6907-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:21:40.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5619" for this suite. 
• [SLOW TEST:8.533 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":41,"skipped":645,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:21:40.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 10 21:21:44.492: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 10 21:21:44.530: INFO: Pod pod-with-poststart-http-hook still exists Mar 10 21:21:46.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 10 21:21:46.535: INFO: Pod pod-with-poststart-http-hook still exists Mar 10 21:21:48.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 10 21:21:48.534: INFO: Pod pod-with-poststart-http-hook still exists Mar 10 21:21:50.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 10 21:21:50.548: INFO: Pod pod-with-poststart-http-hook still exists Mar 10 21:21:52.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 10 21:21:52.534: INFO: Pod pod-with-poststart-http-hook still exists Mar 10 21:21:54.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 10 21:21:54.534: INFO: Pod pod-with-poststart-http-hook still exists Mar 10 21:21:56.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 10 21:21:56.534: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:21:56.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1266" for this suite. 
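The lifecycle-hook test above first starts a handler pod, then creates a pod whose postStart hook issues an HTTP GET against that handler. Stripped to its essentials it resembles the sketch below; the handler address, port, and path are assumptions, since the real values are computed by the framework at run time.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # assumed handler endpoint
          port: 8080                  # assumed handler port
          host: 10.244.2.10           # placeholder for the handler pod's IP

Kubernetes does not mark the container Running until the postStart handler completes, and the "check poststart hook" step passes once the handler pod records the incoming request.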
• [SLOW TEST:16.195 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:21:56.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-a55dc80b-2fb2-48c0-bce9-c8c9bfa7cec5 STEP: Creating configMap with name cm-test-opt-upd-3dd949e3-8984-4048-a39d-f7c2a9cfd581 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a55dc80b-2fb2-48c0-bce9-c8c9bfa7cec5 STEP: Updating configmap cm-test-opt-upd-3dd949e3-8984-4048-a39d-f7c2a9cfd581 STEP: Creating configMap with name cm-test-opt-create-4baaeda8-803d-40a2-9387-528f1b2a6ab7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:23:27.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-788" for this suite. 
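The optional-updates test above wires configMaps into projected volumes with optional: true, then deletes one, updates another, and creates a third while the pod keeps running. An illustrative pod of that shape follows; the volume layout is simplified relative to the real test, and the names are truncated placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # placeholder
spec:
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del            # deleted mid-test; optional, so the pod survives
          optional: true
      - configMap:
          name: cm-test-opt-create         # created mid-test; appears once it exists
          optional: true
  containers:
  - name: viewer
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "while true; do cat /etc/projected/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected

The kubelet refreshes configMap-backed volumes on its periodic sync rather than instantly, which is why "waiting to observe update in volume" can take a minute or more and the whole spec runs for about 90 seconds.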
• [SLOW TEST:90.588 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":704,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:23:27.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 10 21:23:27.200: INFO: PodSpec: initContainers in spec.initContainers Mar 10 21:24:11.147: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1b808414-85fb-4dd0-b0cf-4d32563ac8ea", GenerateName:"", Namespace:"init-container-5232", SelfLink:"/api/v1/namespaces/init-container-5232/pods/pod-init-1b808414-85fb-4dd0-b0cf-4d32563ac8ea", UID:"8a72f8eb-179c-48c9-a248-4ff92aa1ca2d", ResourceVersion:"671562", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719472207, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"200748813"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6qdcp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004b04000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6qdcp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6qdcp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6qdcp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f04068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004262a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f040f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f04110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f04118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f0411c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472207, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472207, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472207, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472207, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.195", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.195"}}, StartTime:(*v1.Time)(0xc0000f3b60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00031ed90)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00031eee0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://740f7e84b73689e0df440bfd463a6979e4cbaca7961de7c48c1273e9b22863db", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0006ea960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0000f3f80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002f0419f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:24:11.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5232" for this suite. • [SLOW TEST:44.039 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":44,"skipped":712,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:24:11.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:24:11.243: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9862001-3d62-4ac6-beab-1ea651722a52" in namespace "projected-2705" to be "success or failure" Mar 10 21:24:11.248: INFO: Pod "downwardapi-volume-f9862001-3d62-4ac6-beab-1ea651722a52": Phase="Pending", Reason="", readiness=false. Elapsed: 5.323268ms Mar 10 21:24:13.252: INFO: Pod "downwardapi-volume-f9862001-3d62-4ac6-beab-1ea651722a52": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009249475s STEP: Saw pod success Mar 10 21:24:13.252: INFO: Pod "downwardapi-volume-f9862001-3d62-4ac6-beab-1ea651722a52" satisfied condition "success or failure" Mar 10 21:24:13.255: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f9862001-3d62-4ac6-beab-1ea651722a52 container client-container: STEP: delete the pod Mar 10 21:24:13.306: INFO: Waiting for pod downwardapi-volume-f9862001-3d62-4ac6-beab-1ea651722a52 to disappear Mar 10 21:24:13.313: INFO: Pod downwardapi-volume-f9862001-3d62-4ac6-beab-1ea651722a52 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:24:13.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2705" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":741,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:24:13.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:24:13.717: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:24:16.751: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:24:16.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3133-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:24:17.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8901" for this suite. STEP: Destroying namespace "webhook-8901-markers" for this suite. 
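The registration step in the webhook test above amounts to a MutatingWebhookConfiguration that targets the test CRD. A hand-written sketch follows; the group and resource are taken from the CRD name in the log, while the service name, path, and CA bundle are placeholders for objects the framework creates.

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource-example     # placeholder
webhooks:
- name: mutate-custom-resource.webhook.example.com
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-3133-crds"]
  clientConfig:
    service:
      namespace: webhook-8901
      name: e2e-test-webhook               # placeholder service name
      path: /mutating-custom-resource      # placeholder path
    caBundle: "<base64-encoded CA bundle>" # placeholder
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]

The "with pruning" part of the test name refers to the CRD having a structural schema with pruning enabled: any field the webhook injects must be declared in that schema, or the API server prunes it from the stored object.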
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":46,"skipped":749,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:24:18.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-a4ba1193-7e31-47cf-8789-263cbe62c9ec STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a4ba1193-7e31-47cf-8789-263cbe62c9ec STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:24:22.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1941" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":758,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:24:22.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 10 21:24:22.363: INFO: Waiting up to 5m0s for pod "downward-api-0e30fa83-5019-4000-a0cb-e6ea461de947" in namespace "downward-api-2352" to be "success or failure" Mar 10 21:24:22.383: INFO: Pod "downward-api-0e30fa83-5019-4000-a0cb-e6ea461de947": Phase="Pending", Reason="", readiness=false. Elapsed: 19.434653ms Mar 10 21:24:24.387: INFO: Pod "downward-api-0e30fa83-5019-4000-a0cb-e6ea461de947": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.023263823s STEP: Saw pod success Mar 10 21:24:24.387: INFO: Pod "downward-api-0e30fa83-5019-4000-a0cb-e6ea461de947" satisfied condition "success or failure" Mar 10 21:24:24.390: INFO: Trying to get logs from node jerma-worker2 pod downward-api-0e30fa83-5019-4000-a0cb-e6ea461de947 container dapi-container: STEP: delete the pod Mar 10 21:24:24.408: INFO: Waiting for pod downward-api-0e30fa83-5019-4000-a0cb-e6ea461de947 to disappear Mar 10 21:24:24.423: INFO: Pod downward-api-0e30fa83-5019-4000-a0cb-e6ea461de947 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:24:24.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2352" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":812,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:24:24.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:24:24.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7962' Mar 10 21:24:25.490: INFO: stderr: "" Mar 10 21:24:25.490: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 10 21:24:25.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7962' Mar 10 21:24:25.829: INFO: stderr: "" Mar 10 21:24:25.829: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 10 21:24:26.834: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 21:24:26.834: INFO: Found 0 / 1 Mar 10 21:24:27.833: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 21:24:27.833: INFO: Found 1 / 1 Mar 10 21:24:27.833: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 10 21:24:27.837: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 21:24:27.837: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 10 21:24:27.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-l9rzw --namespace=kubectl-7962' Mar 10 21:24:27.977: INFO: stderr: "" Mar 10 21:24:27.977: INFO: stdout: "Name: agnhost-master-l9rzw\nNamespace: kubectl-7962\nPriority: 0\nNode: jerma-worker2/172.17.0.5\nStart Time: Tue, 10 Mar 2020 21:24:25 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.180\nIPs:\n IP: 10.244.1.180\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://1b20e3694fdb61776201dba4e93f5951238ffa18a22f8531821bb4d40f76b545\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 10 Mar 2020 21:24:26 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-hgq54 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-hgq54:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hgq54\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-7962/agnhost-master-l9rzw to jerma-worker2\n Normal Pulled 1s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Mar 10 21:24:27.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7962' Mar 10 21:24:28.086: INFO: stderr: "" Mar 10 21:24:28.086: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7962\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-l9rzw\n" Mar 10 21:24:28.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7962' Mar 10 21:24:28.214: INFO: stderr: "" Mar 10 21:24:28.214: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7962\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.45.221\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.180:6379\nSession Affinity: None\nEvents: \n" Mar 10 21:24:28.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Mar 10 21:24:28.301: INFO: stderr: "" Mar 10 21:24:28.301: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:47:04 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Tue, 10 Mar 2020 21:24:26 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 10 Mar 2020 21:19:41 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 10 Mar 2020 21:19:41 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 10 Mar 2020 21:19:41 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 10 Mar 2020 21:19:41 +0000 Sun, 08 Mar 2020 14:48:18 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: 3f4950fefd574d4aaa94513c5781e5d9\n System UUID: 58a385c4-2d08-428a-9405-5e6b12d5bd17\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-6n4ms 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d6h\n kube-system coredns-6955765f44-nlwfn 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d6h\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d6h\n kube-system kindnet-2glhp 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d6h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2d6h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2d6h\n kube-system kube-proxy-zmch2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d6h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2d6h\n local-path-storage local-path-provisioner-85445b74d4-gpcbt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d6h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 10 21:24:28.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7962' Mar 10 21:24:28.371: INFO: stderr: "" Mar 10 21:24:28.371: INFO: stdout: "Name: kubectl-7962\nLabels: e2e-framework=kubectl\n e2e-run=2d7bcd85-e710-4684-905a-e4f1c05fcad0\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:24:28.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7962" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":49,"skipped":825,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:24:28.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 10 21:24:30.484: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:24:30.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3214" for this suite. 
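The container-runtime test above checks that a container running as a non-root user can write its termination message to a non-default terminationMessagePath. An equivalent minimal pod might be the following; the path, UID, and names are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example        # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: docker.io/library/busybox:1.29
    # Write the message the test expects, then exit 0.
    command: ["/bin/sh", "-c", "printf DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
    securityContext:
      runAsUser: 1000                      # non-root, per the [LinuxOnly] variant

After the container exits, the kubelet copies the file's contents into the terminated container state's message field, which is the "DONE" the assertion above matches.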
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":840,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:24:30.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7093 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7093 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7093 Mar 10 21:24:30.652: INFO: Found 0 stateful pods, waiting for 1 Mar 10 21:24:40.670: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 10 21:24:40.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7093 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 10 21:24:40.899: INFO: stderr: "I0310 21:24:40.799397 504 log.go:172] (0xc00094c6e0) (0xc000691e00) Create stream\nI0310 21:24:40.799440 504 log.go:172] (0xc00094c6e0) (0xc000691e00) Stream added, broadcasting: 1\nI0310 21:24:40.801749 504 log.go:172] (0xc00094c6e0) Reply frame received for 1\nI0310 21:24:40.801776 504 log.go:172] (0xc00094c6e0) (0xc000691ea0) Create stream\nI0310 21:24:40.801783 504 log.go:172] (0xc00094c6e0) (0xc000691ea0) Stream added, broadcasting: 3\nI0310 21:24:40.802626 504 log.go:172] (0xc00094c6e0) Reply frame received for 3\nI0310 21:24:40.802653 504 log.go:172] (0xc00094c6e0) (0xc000892000) Create stream\nI0310 21:24:40.802661 504 log.go:172] (0xc00094c6e0) (0xc000892000) Stream added, broadcasting: 5\nI0310 21:24:40.803373 504 log.go:172] (0xc00094c6e0) Reply frame received for 5\nI0310 21:24:40.865388 504 log.go:172] (0xc00094c6e0) Data frame received for 5\nI0310 21:24:40.865409 504 log.go:172] (0xc000892000) (5) Data frame handling\nI0310 21:24:40.865423 504 log.go:172] (0xc000892000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0310 21:24:40.893433 504 log.go:172] (0xc00094c6e0) Data frame received for 5\nI0310 21:24:40.893465 504 log.go:172] (0xc000892000) (5) Data frame handling\nI0310 21:24:40.893491 504 log.go:172] (0xc00094c6e0) Data 
frame received for 3\nI0310 21:24:40.893503 504 log.go:172] (0xc000691ea0) (3) Data frame handling\nI0310 21:24:40.893518 504 log.go:172] (0xc000691ea0) (3) Data frame sent\nI0310 21:24:40.893530 504 log.go:172] (0xc00094c6e0) Data frame received for 3\nI0310 21:24:40.893541 504 log.go:172] (0xc000691ea0) (3) Data frame handling\nI0310 21:24:40.895309 504 log.go:172] (0xc00094c6e0) Data frame received for 1\nI0310 21:24:40.895334 504 log.go:172] (0xc000691e00) (1) Data frame handling\nI0310 21:24:40.895347 504 log.go:172] (0xc000691e00) (1) Data frame sent\nI0310 21:24:40.895363 504 log.go:172] (0xc00094c6e0) (0xc000691e00) Stream removed, broadcasting: 1\nI0310 21:24:40.895390 504 log.go:172] (0xc00094c6e0) Go away received\nI0310 21:24:40.895775 504 log.go:172] (0xc00094c6e0) (0xc000691e00) Stream removed, broadcasting: 1\nI0310 21:24:40.895797 504 log.go:172] (0xc00094c6e0) (0xc000691ea0) Stream removed, broadcasting: 3\nI0310 21:24:40.895807 504 log.go:172] (0xc00094c6e0) (0xc000892000) Stream removed, broadcasting: 5\n" Mar 10 21:24:40.899: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 10 21:24:40.899: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 10 21:24:40.902: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 10 21:24:50.906: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 10 21:24:50.906: INFO: Waiting for statefulset status.replicas updated to 0 Mar 10 21:24:50.927: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999306s Mar 10 21:24:51.931: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988650688s Mar 10 21:24:52.935: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.984414409s Mar 10 21:24:53.938: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.980872068s Mar 10 21:24:54.941: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977412351s Mar 10 21:24:55.945: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.974370929s Mar 10 21:24:56.949: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.970659785s Mar 10 21:24:57.953: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.966868834s Mar 10 21:24:58.957: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.962665847s Mar 10 21:24:59.961: INFO: Verifying statefulset ss doesn't scale past 1 for another 958.63355ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7093 Mar 10 21:25:00.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7093 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 10 21:25:01.175: INFO: stderr: "I0310 21:25:01.103666 527 log.go:172] (0xc000a2c000) (0xc000aae0a0) Create stream\nI0310 21:25:01.103732 527 log.go:172] (0xc000a2c000) (0xc000aae0a0) Stream added, broadcasting: 1\nI0310 21:25:01.106225 527 log.go:172] (0xc000a2c000) Reply frame received for 1\nI0310 21:25:01.106258 527 log.go:172] (0xc000a2c000) (0xc0009f8000) Create stream\nI0310 21:25:01.106266 527 log.go:172] (0xc000a2c000) (0xc0009f8000) Stream added, broadcasting: 3\nI0310 21:25:01.107072 527 log.go:172] (0xc000a2c000) Reply frame received for 3\nI0310 21:25:01.107096 527 log.go:172] (0xc000a2c000) (0xc0009f80a0) Create 
stream\nI0310 21:25:01.107103 527 log.go:172] (0xc000a2c000) (0xc0009f80a0) Stream added, broadcasting: 5\nI0310 21:25:01.107824 527 log.go:172] (0xc000a2c000) Reply frame received for 5\nI0310 21:25:01.169333 527 log.go:172] (0xc000a2c000) Data frame received for 3\nI0310 21:25:01.169357 527 log.go:172] (0xc0009f8000) (3) Data frame handling\nI0310 21:25:01.169397 527 log.go:172] (0xc000a2c000) Data frame received for 5\nI0310 21:25:01.169424 527 log.go:172] (0xc0009f80a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0310 21:25:01.169438 527 log.go:172] (0xc0009f8000) (3) Data frame sent\nI0310 21:25:01.169462 527 log.go:172] (0xc000a2c000) Data frame received for 3\nI0310 21:25:01.169470 527 log.go:172] (0xc0009f8000) (3) Data frame handling\nI0310 21:25:01.169501 527 log.go:172] (0xc0009f80a0) (5) Data frame sent\nI0310 21:25:01.169526 527 log.go:172] (0xc000a2c000) Data frame received for 5\nI0310 21:25:01.169534 527 log.go:172] (0xc0009f80a0) (5) Data frame handling\nI0310 21:25:01.170580 527 log.go:172] (0xc000a2c000) Data frame received for 1\nI0310 21:25:01.170598 527 log.go:172] (0xc000aae0a0) (1) Data frame handling\nI0310 21:25:01.170611 527 log.go:172] (0xc000aae0a0) (1) Data frame sent\nI0310 21:25:01.170643 527 log.go:172] (0xc000a2c000) (0xc000aae0a0) Stream removed, broadcasting: 1\nI0310 21:25:01.170665 527 log.go:172] (0xc000a2c000) Go away received\nI0310 21:25:01.171392 527 log.go:172] (0xc000a2c000) (0xc000aae0a0) Stream removed, broadcasting: 1\nI0310 21:25:01.171426 527 log.go:172] (0xc000a2c000) (0xc0009f8000) Stream removed, broadcasting: 3\nI0310 21:25:01.171436 527 log.go:172] (0xc000a2c000) (0xc0009f80a0) Stream removed, broadcasting: 5\n" Mar 10 21:25:01.175: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 10 21:25:01.175: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 10 21:25:01.178: INFO: Found 1 stateful pods, waiting for 3 Mar 10 21:25:11.182: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:25:11.182: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:25:11.183: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 10 21:25:11.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7093 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 10 21:25:11.433: INFO: stderr: "I0310 21:25:11.344599 547 log.go:172] (0xc000ba1130) (0xc000b080a0) Create stream\nI0310 21:25:11.344640 547 log.go:172] (0xc000ba1130) (0xc000b080a0) Stream added, broadcasting: 1\nI0310 21:25:11.349323 547 log.go:172] (0xc000ba1130) Reply frame received for 1\nI0310 21:25:11.349379 547 log.go:172] (0xc000ba1130) (0xc00082da40) Create stream\nI0310 21:25:11.349394 547 log.go:172] (0xc000ba1130) (0xc00082da40) Stream added, broadcasting: 3\nI0310 21:25:11.354245 547 log.go:172] (0xc000ba1130) Reply frame received for 3\nI0310 21:25:11.354284 547 log.go:172] (0xc000ba1130) (0xc0006e8640) Create stream\nI0310 21:25:11.354295 547 log.go:172] (0xc000ba1130) (0xc0006e8640) Stream added, broadcasting: 5\nI0310 21:25:11.355125 547 log.go:172] (0xc000ba1130) Reply frame received for 5\nI0310 21:25:11.428553 
547 log.go:172] (0xc000ba1130) Data frame received for 5\nI0310 21:25:11.428584 547 log.go:172] (0xc0006e8640) (5) Data frame handling\nI0310 21:25:11.428598 547 log.go:172] (0xc0006e8640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0310 21:25:11.428633 547 log.go:172] (0xc000ba1130) Data frame received for 5\nI0310 21:25:11.428642 547 log.go:172] (0xc0006e8640) (5) Data frame handling\nI0310 21:25:11.428661 547 log.go:172] (0xc000ba1130) Data frame received for 3\nI0310 21:25:11.428674 547 log.go:172] (0xc00082da40) (3) Data frame handling\nI0310 21:25:11.428685 547 log.go:172] (0xc00082da40) (3) Data frame sent\nI0310 21:25:11.428697 547 log.go:172] (0xc000ba1130) Data frame received for 3\nI0310 21:25:11.428706 547 log.go:172] (0xc00082da40) (3) Data frame handling\nI0310 21:25:11.429537 547 log.go:172] (0xc000ba1130) Data frame received for 1\nI0310 21:25:11.429574 547 log.go:172] (0xc000b080a0) (1) Data frame handling\nI0310 21:25:11.429587 547 log.go:172] (0xc000b080a0) (1) Data frame sent\nI0310 21:25:11.429599 547 log.go:172] (0xc000ba1130) (0xc000b080a0) Stream removed, broadcasting: 1\nI0310 21:25:11.429612 547 log.go:172] (0xc000ba1130) Go away received\nI0310 21:25:11.430052 547 log.go:172] (0xc000ba1130) (0xc000b080a0) Stream removed, broadcasting: 1\nI0310 21:25:11.430066 547 log.go:172] (0xc000ba1130) (0xc00082da40) Stream removed, broadcasting: 3\nI0310 21:25:11.430074 547 log.go:172] (0xc000ba1130) (0xc0006e8640) Stream removed, broadcasting: 5\n" Mar 10 21:25:11.433: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 10 21:25:11.433: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 10 21:25:11.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7093 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 10 21:25:11.611: INFO: stderr: "I0310 21:25:11.546888 569 log.go:172] (0xc000a914a0) (0xc000ad6820) Create stream\nI0310 21:25:11.546955 569 log.go:172] (0xc000a914a0) (0xc000ad6820) Stream added, broadcasting: 1\nI0310 21:25:11.549960 569 log.go:172] (0xc000a914a0) Reply frame received for 1\nI0310 21:25:11.549992 569 log.go:172] (0xc000a914a0) (0xc0006325a0) Create stream\nI0310 21:25:11.550003 569 log.go:172] (0xc000a914a0) (0xc0006325a0) Stream added, broadcasting: 3\nI0310 21:25:11.550639 569 log.go:172] (0xc000a914a0) Reply frame received for 3\nI0310 21:25:11.550663 569 log.go:172] (0xc000a914a0) (0xc000ad6000) Create stream\nI0310 21:25:11.550669 569 log.go:172] (0xc000a914a0) (0xc000ad6000) Stream added, broadcasting: 5\nI0310 21:25:11.551193 569 log.go:172] (0xc000a914a0) Reply frame received for 5\nI0310 21:25:11.591297 569 log.go:172] (0xc000a914a0) Data frame received for 5\nI0310 21:25:11.591315 569 log.go:172] (0xc000ad6000) (5) Data frame handling\nI0310 21:25:11.591326 569 log.go:172] (0xc000ad6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0310 21:25:11.606955 569 log.go:172] (0xc000a914a0) Data frame received for 5\nI0310 21:25:11.606979 569 log.go:172] (0xc000ad6000) (5) Data frame handling\nI0310 21:25:11.606992 569 log.go:172] (0xc000a914a0) Data frame received for 3\nI0310 21:25:11.606998 569 log.go:172] (0xc0006325a0) (3) Data frame handling\nI0310 21:25:11.607004 569 log.go:172] (0xc0006325a0) (3) Data frame sent\nI0310 21:25:11.607009 569 log.go:172] (0xc000a914a0) Data frame 
received for 3\nI0310 21:25:11.607018 569 log.go:172] (0xc0006325a0) (3) Data frame handling\nI0310 21:25:11.608337 569 log.go:172] (0xc000a914a0) Data frame received for 1\nI0310 21:25:11.608349 569 log.go:172] (0xc000ad6820) (1) Data frame handling\nI0310 21:25:11.608354 569 log.go:172] (0xc000ad6820) (1) Data frame sent\nI0310 21:25:11.608361 569 log.go:172] (0xc000a914a0) (0xc000ad6820) Stream removed, broadcasting: 1\nI0310 21:25:11.608392 569 log.go:172] (0xc000a914a0) Go away received\nI0310 21:25:11.608567 569 log.go:172] (0xc000a914a0) (0xc000ad6820) Stream removed, broadcasting: 1\nI0310 21:25:11.608577 569 log.go:172] (0xc000a914a0) (0xc0006325a0) Stream removed, broadcasting: 3\nI0310 21:25:11.608582 569 log.go:172] (0xc000a914a0) (0xc000ad6000) Stream removed, broadcasting: 5\n" Mar 10 21:25:11.611: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 10 21:25:11.611: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 10 21:25:11.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7093 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 10 21:25:11.815: INFO: stderr: "I0310 21:25:11.717075 589 log.go:172] (0xc000a6b550) (0xc000992820) Create stream\nI0310 21:25:11.717107 589 log.go:172] (0xc000a6b550) (0xc000992820) Stream added, broadcasting: 1\nI0310 21:25:11.720294 589 log.go:172] (0xc000a6b550) Reply frame received for 1\nI0310 21:25:11.720327 589 log.go:172] (0xc000a6b550) (0xc000634640) Create stream\nI0310 21:25:11.720337 589 log.go:172] (0xc000a6b550) (0xc000634640) Stream added, broadcasting: 3\nI0310 21:25:11.720879 589 log.go:172] (0xc000a6b550) Reply frame received for 3\nI0310 21:25:11.720900 589 log.go:172] (0xc000a6b550) (0xc000453400) Create stream\nI0310 21:25:11.720908 589 log.go:172] (0xc000a6b550) (0xc000453400) Stream added, broadcasting: 5\nI0310 21:25:11.721449 589 log.go:172] (0xc000a6b550) Reply frame received for 5\nI0310 21:25:11.789591 589 log.go:172] (0xc000a6b550) Data frame received for 5\nI0310 21:25:11.789617 589 log.go:172] (0xc000453400) (5) Data frame handling\nI0310 21:25:11.789635 589 log.go:172] (0xc000453400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0310 21:25:11.810226 589 log.go:172] (0xc000a6b550) Data frame received for 5\nI0310 21:25:11.810246 589 log.go:172] (0xc000453400) (5) Data frame handling\nI0310 21:25:11.810290 589 log.go:172] (0xc000a6b550) Data frame received for 3\nI0310 21:25:11.810316 589 log.go:172] (0xc000634640) (3) Data frame handling\nI0310 21:25:11.810332 589 log.go:172] (0xc000634640) (3) Data frame sent\nI0310 21:25:11.810347 589 log.go:172] (0xc000a6b550) Data frame received for 3\nI0310 21:25:11.810355 589 log.go:172] (0xc000634640) (3) Data frame handling\nI0310 21:25:11.811577 589 log.go:172] (0xc000a6b550) Data frame received for 1\nI0310 21:25:11.811645 589 log.go:172] (0xc000992820) (1) Data frame handling\nI0310 21:25:11.811668 589 log.go:172] (0xc000992820) (1) Data frame sent\nI0310 21:25:11.811680 589 log.go:172] (0xc000a6b550) (0xc000992820) Stream removed, broadcasting: 1\nI0310 21:25:11.811698 589 log.go:172] (0xc000a6b550) Go away received\nI0310 21:25:11.811945 589 log.go:172] (0xc000a6b550) (0xc000992820) Stream removed, broadcasting: 1\nI0310 21:25:11.811962 589 log.go:172] (0xc000a6b550) (0xc000634640) Stream removed, broadcasting: 3\nI0310 21:25:11.811969 
589 log.go:172] (0xc000a6b550) (0xc000453400) Stream removed, broadcasting: 5\n" Mar 10 21:25:11.815: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 10 21:25:11.815: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 10 21:25:11.815: INFO: Waiting for statefulset status.replicas updated to 0 Mar 10 21:25:11.818: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 10 21:25:21.840: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 10 21:25:21.840: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 10 21:25:21.840: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 10 21:25:21.868: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999601s Mar 10 21:25:22.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993289954s Mar 10 21:25:23.874: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989090963s Mar 10 21:25:24.881: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986333435s Mar 10 21:25:25.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980221039s Mar 10 21:25:26.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97643669s Mar 10 21:25:27.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.971874056s Mar 10 21:25:28.902: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962737274s Mar 10 21:25:29.906: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958511487s Mar 10 21:25:30.912: INFO: Verifying statefulset ss doesn't scale past 3 for another 954.570444ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7093 Mar 10 21:25:31.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7093 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 10 21:25:32.134: INFO: stderr: "I0310 21:25:32.064411 609 log.go:172] (0xc000bc2000) (0xc0006fb9a0) Create stream\nI0310 21:25:32.064461 609 log.go:172] (0xc000bc2000) (0xc0006fb9a0) Stream added, broadcasting: 1\nI0310 21:25:32.070270 609 log.go:172] (0xc000bc2000) Reply frame received for 1\nI0310 21:25:32.070305 609 log.go:172] (0xc000bc2000) (0xc000a2e000) Create stream\nI0310 21:25:32.070314 609 log.go:172] (0xc000bc2000) (0xc000a2e000) Stream added, broadcasting: 3\nI0310 21:25:32.071975 609 log.go:172] (0xc000bc2000) Reply frame received for 3\nI0310 21:25:32.072008 609 log.go:172] (0xc000bc2000) (0xc000a2e0a0) Create stream\nI0310 21:25:32.072024 609 log.go:172] (0xc000bc2000) (0xc000a2e0a0) Stream added, broadcasting: 5\nI0310 21:25:32.072835 609 log.go:172] (0xc000bc2000) Reply frame received for 5\nI0310 21:25:32.128722 609 log.go:172] (0xc000bc2000) Data frame received for 5\nI0310 21:25:32.128757 609 log.go:172] (0xc000a2e0a0) (5) Data frame handling\nI0310 21:25:32.128770 609 log.go:172] (0xc000a2e0a0) (5) Data frame sent\nI0310 21:25:32.128779 609 log.go:172] (0xc000bc2000) Data frame received for 5\nI0310 21:25:32.128788 609 log.go:172] (0xc000a2e0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0310 21:25:32.128807 609 log.go:172] (0xc000bc2000) Data frame received for 3\nI0310 21:25:32.128817 609 log.go:172]
(0xc000a2e000) (3) Data frame handling\nI0310 21:25:32.128823 609 log.go:172] (0xc000a2e000) (3) Data frame sent\nI0310 21:25:32.129026 609 log.go:172] (0xc000bc2000) Data frame received for 3\nI0310 21:25:32.129041 609 log.go:172] (0xc000a2e000) (3) Data frame handling\nI0310 21:25:32.130399 609 log.go:172] (0xc000bc2000) Data frame received for 1\nI0310 21:25:32.130413 609 log.go:172] (0xc0006fb9a0) (1) Data frame handling\nI0310 21:25:32.130419 609 log.go:172] (0xc0006fb9a0) (1) Data frame sent\nI0310 21:25:32.130430 609 log.go:172] (0xc000bc2000) (0xc0006fb9a0) Stream removed, broadcasting: 1\nI0310 21:25:32.130443 609 log.go:172] (0xc000bc2000) Go away received\nI0310 21:25:32.130802 609 log.go:172] (0xc000bc2000) (0xc0006fb9a0) Stream removed, broadcasting: 1\nI0310 21:25:32.130817 609 log.go:172] (0xc000bc2000) (0xc000a2e000) Stream removed, broadcasting: 3\nI0310 21:25:32.130824 609 log.go:172] (0xc000bc2000) (0xc000a2e0a0) Stream removed, broadcasting: 5\n" Mar 10 21:25:32.134: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 10 21:25:32.134: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 10 21:25:32.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7093 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 10 21:25:32.328: INFO: stderr: "I0310 21:25:32.254078 629 log.go:172] (0xc000ab5130) (0xc000a8a460) Create stream\nI0310 21:25:32.254143 629 log.go:172] (0xc000ab5130) (0xc000a8a460) Stream added, broadcasting: 1\nI0310 21:25:32.255367 629 log.go:172] (0xc000ab5130) Reply frame received for 1\nI0310 21:25:32.255416 629 log.go:172] (0xc000ab5130) (0xc000a8a500) Create stream\nI0310 21:25:32.255436 629 log.go:172] (0xc000ab5130) (0xc000a8a500) Stream added, broadcasting: 3\nI0310 21:25:32.256096 629 log.go:172] (0xc000ab5130) Reply frame received for 3\nI0310 21:25:32.256118 629 log.go:172] (0xc000ab5130) (0xc000a8a5a0) Create stream\nI0310 21:25:32.256125 629 log.go:172] (0xc000ab5130) (0xc000a8a5a0) Stream added, broadcasting: 5\nI0310 21:25:32.256791 629 log.go:172] (0xc000ab5130) Reply frame received for 5\nI0310 21:25:32.323635 629 log.go:172] (0xc000ab5130) Data frame received for 3\nI0310 21:25:32.323654 629 log.go:172] (0xc000a8a500) (3) Data frame handling\nI0310 21:25:32.323667 629 log.go:172] (0xc000a8a500) (3) Data frame sent\nI0310 21:25:32.323673 629 log.go:172] (0xc000ab5130) Data frame received for 3\nI0310 21:25:32.323679 629 log.go:172] (0xc000a8a500) (3) Data frame handling\nI0310 21:25:32.323819 629 log.go:172] (0xc000ab5130) Data frame received for 5\nI0310 21:25:32.323847 629 log.go:172] (0xc000a8a5a0) (5) Data frame handling\nI0310 21:25:32.323868 629 log.go:172] (0xc000a8a5a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0310 21:25:32.323900 629 log.go:172] (0xc000ab5130) Data frame received for 5\nI0310 21:25:32.323911 629 log.go:172] (0xc000a8a5a0) (5) Data frame handling\nI0310 21:25:32.324902 629 log.go:172] (0xc000ab5130) Data frame received for 1\nI0310 21:25:32.324915 629 log.go:172] (0xc000a8a460) (1) Data frame handling\nI0310 21:25:32.324926 629 log.go:172] (0xc000a8a460) (1) Data frame sent\nI0310 21:25:32.324933 629 log.go:172] (0xc000ab5130) (0xc000a8a460) Stream removed, broadcasting: 1\nI0310 21:25:32.324946 629 log.go:172] (0xc000ab5130) Go away received\nI0310 21:25:32.325323 629 log.go:172] 
(0xc000ab5130) (0xc000a8a460) Stream removed, broadcasting: 1\nI0310 21:25:32.325337 629 log.go:172] (0xc000ab5130) (0xc000a8a500) Stream removed, broadcasting: 3\nI0310 21:25:32.325344 629 log.go:172] (0xc000ab5130) (0xc000a8a5a0) Stream removed, broadcasting: 5\n" Mar 10 21:25:32.328: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 10 21:25:32.328: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 10 21:25:32.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7093 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 10 21:25:32.501: INFO: stderr: "I0310 21:25:32.437578 649 log.go:172] (0xc000a2a000) (0xc0005ca6e0) Create stream\nI0310 21:25:32.437616 649 log.go:172] (0xc000a2a000) (0xc0005ca6e0) Stream added, broadcasting: 1\nI0310 21:25:32.439424 649 log.go:172] (0xc000a2a000) Reply frame received for 1\nI0310 21:25:32.439448 649 log.go:172] (0xc000a2a000) (0xc0003e34a0) Create stream\nI0310 21:25:32.439457 649 log.go:172] (0xc000a2a000) (0xc0003e34a0) Stream added, broadcasting: 3\nI0310 21:25:32.440053 649 log.go:172] (0xc000a2a000) Reply frame received for 3\nI0310 21:25:32.440080 649 log.go:172] (0xc000a2a000) (0xc0009e2000) Create stream\nI0310 21:25:32.440089 649 log.go:172] (0xc000a2a000) (0xc0009e2000) Stream added, broadcasting: 5\nI0310 21:25:32.440689 649 log.go:172] (0xc000a2a000) Reply frame received for 5\nI0310 21:25:32.496540 649 log.go:172] (0xc000a2a000) Data frame received for 5\nI0310 21:25:32.496572 649 log.go:172] (0xc0009e2000) (5) Data frame handling\nI0310 21:25:32.496581 649 log.go:172] (0xc0009e2000) (5) Data frame sent\nI0310 21:25:32.496589 649 log.go:172] (0xc000a2a000) Data frame received for 5\nI0310 21:25:32.496594 649 log.go:172] (0xc0009e2000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0310 21:25:32.496610 649 log.go:172] (0xc000a2a000) Data frame received for 3\nI0310 21:25:32.496615 649 log.go:172] (0xc0003e34a0) (3) Data frame handling\nI0310 21:25:32.496621 649 log.go:172] (0xc0003e34a0) (3) Data frame sent\nI0310 21:25:32.496630 649 log.go:172] (0xc000a2a000) Data frame received for 3\nI0310 21:25:32.496636 649 log.go:172] (0xc0003e34a0) (3) Data frame handling\nI0310 21:25:32.497326 649 log.go:172] (0xc000a2a000) Data frame received for 1\nI0310 21:25:32.497342 649 log.go:172] (0xc0005ca6e0) (1) Data frame handling\nI0310 21:25:32.497379 649 log.go:172] (0xc0005ca6e0) (1) Data frame sent\nI0310 21:25:32.497396 649 log.go:172] (0xc000a2a000) (0xc0005ca6e0) Stream removed, broadcasting: 1\nI0310 21:25:32.497412 649 log.go:172] (0xc000a2a000) Go away received\nI0310 21:25:32.497693 649 log.go:172] (0xc000a2a000) (0xc0005ca6e0) Stream removed, broadcasting: 1\nI0310 21:25:32.497708 649 log.go:172] (0xc000a2a000) (0xc0003e34a0) Stream removed, broadcasting: 3\nI0310 21:25:32.497714 649 log.go:172] (0xc000a2a000) (0xc0009e2000) Stream removed, broadcasting: 5\n" Mar 10 21:25:32.501: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 10 21:25:32.501: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 10 21:25:32.501: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality 
[StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 10 21:26:12.513: INFO: Deleting all statefulset in ns statefulset-7093 Mar 10 21:26:12.533: INFO: Scaling statefulset ss to 0 Mar 10 21:26:12.541: INFO: Waiting for statefulset status.replicas updated to 0 Mar 10 21:26:12.547: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:26:12.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7093" for this suite. • [SLOW TEST:102.057 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":51,"skipped":855,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:26:12.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:26:14.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9260" for this suite. 
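For reference, the wrapper-volumes spec above boils down to one pod mounting a secret volume and a configMap volume side by side, which must not conflict. A minimal sketch of such a pod, assuming a reachable cluster via kubectl; the object names, image, and mount paths are illustrative, not taken from the log:

kubectl create secret generic wrapped-secret --from-literal=data-1=value-1
kubectl create configmap wrapped-configmap --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-configmap-pod        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                  # illustrative image
    # both volume types must be readable from the same pod
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapped-secret
  - name: configmap-volume
    configMap:
      name: wrapped-configmap
EOF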
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":52,"skipped":865,"failed":0} S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:26:14.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:26:14.996: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:26:17.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1432" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:26:17.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 10 21:26:17.174: INFO: Waiting up to 5m0s for pod "pod-709ec39a-9dfb-4a5f-9377-8d43d181c9b2" in namespace "emptydir-3798" to be "success or failure" Mar 10 21:26:17.194: INFO: Pod "pod-709ec39a-9dfb-4a5f-9377-8d43d181c9b2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.208572ms Mar 10 21:26:19.198: INFO: Pod "pod-709ec39a-9dfb-4a5f-9377-8d43d181c9b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024053183s Mar 10 21:26:21.201: INFO: Pod "pod-709ec39a-9dfb-4a5f-9377-8d43d181c9b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027358293s STEP: Saw pod success Mar 10 21:26:21.201: INFO: Pod "pod-709ec39a-9dfb-4a5f-9377-8d43d181c9b2" satisfied condition "success or failure" Mar 10 21:26:21.204: INFO: Trying to get logs from node jerma-worker2 pod pod-709ec39a-9dfb-4a5f-9377-8d43d181c9b2 container test-container: STEP: delete the pod Mar 10 21:26:21.232: INFO: Waiting for pod pod-709ec39a-9dfb-4a5f-9377-8d43d181c9b2 to disappear Mar 10 21:26:21.248: INFO: Pod pod-709ec39a-9dfb-4a5f-9377-8d43d181c9b2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:26:21.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3798" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":892,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:26:21.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444 STEP: creating a pod Mar 10 21:26:21.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-3494 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 10 21:26:21.400: INFO: stderr: "" Mar 10 21:26:21.401: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 10 21:26:21.401: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 10 21:26:21.401: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3494" to be "running and ready, or succeeded" Mar 10 21:26:21.418: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 17.682595ms Mar 10 21:26:23.422: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.02125696s Mar 10 21:26:23.422: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 10 21:26:23.422: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Mar 10 21:26:23.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3494' Mar 10 21:26:23.552: INFO: stderr: "" Mar 10 21:26:23.552: INFO: stdout: "I0310 21:26:22.570683 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/464 326\nI0310 21:26:22.770833 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/lkk 564\nI0310 21:26:22.970839 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/qmtm 589\nI0310 21:26:23.170930 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/2sd 401\nI0310 21:26:23.370871 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/mrnv 555\n" STEP: limiting log lines Mar 10 21:26:23.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3494 --tail=1' Mar 10 21:26:23.665: INFO: stderr: "" Mar 10 21:26:23.665: INFO: stdout: "I0310 21:26:23.570833 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/kgc 300\n" Mar 10 21:26:23.665: INFO: got output "I0310 21:26:23.570833 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/kgc 300\n" STEP: limiting log bytes Mar 10 21:26:23.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3494 --limit-bytes=1' Mar 10 21:26:23.759: INFO: stderr: "" Mar 10 21:26:23.759: INFO: stdout: "I" Mar 10 21:26:23.759: INFO: got output "I" STEP: exposing timestamps Mar 10 21:26:23.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3494 --tail=1 --timestamps' Mar 10 21:26:23.860: INFO: stderr: "" Mar 10 21:26:23.860: INFO: stdout: "2020-03-10T21:26:23.770952837Z I0310 21:26:23.770836 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/kcf6 439\n" Mar 10 21:26:23.860: INFO: got output "2020-03-10T21:26:23.770952837Z I0310 21:26:23.770836 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/kcf6 439\n" STEP: restricting to a time range Mar 10 21:26:26.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3494 --since=1s' Mar 10 21:26:26.466: INFO: stderr: "" Mar 10 21:26:26.466: INFO: stdout: "I0310 21:26:25.570884 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/v77z 259\nI0310 21:26:25.770890 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/hsb 586\nI0310 21:26:25.970900 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/9xt 246\nI0310 21:26:26.170926 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/fnl 437\nI0310 21:26:26.370835 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/4pbd 407\n" Mar 10 21:26:26.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3494 --since=24h' Mar 10 21:26:26.562: INFO: stderr: "" Mar 10 21:26:26.562: INFO: stdout: "I0310 21:26:22.570683 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/464 326\nI0310 21:26:22.770833 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/lkk 564\nI0310 21:26:22.970839 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/qmtm 589\nI0310 21:26:23.170930 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/2sd 401\nI0310 21:26:23.370871 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/mrnv
555\nI0310 21:26:23.570833 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/kgc 300\nI0310 21:26:23.770836 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/kcf6 439\nI0310 21:26:23.970852 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/c9x9 321\nI0310 21:26:24.170909 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/brq7 379\nI0310 21:26:24.370884 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/8v8 274\nI0310 21:26:24.570872 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/hqp4 572\nI0310 21:26:24.770874 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/2w45 570\nI0310 21:26:24.970848 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/249 421\nI0310 21:26:25.170871 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/mcf 359\nI0310 21:26:25.370841 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/pwd 265\nI0310 21:26:25.570884 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/v77z 259\nI0310 21:26:25.770890 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/hsb 586\nI0310 21:26:25.970900 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/9xt 246\nI0310 21:26:26.170926 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/fnl 437\nI0310 21:26:26.370835 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/4pbd 407\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 Mar 10 21:26:26.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3494' Mar 10 21:26:28.385: INFO: stderr: "" Mar 10 21:26:28.385: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:26:28.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3494" for this suite. 
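The filters exercised by this spec map one-to-one onto kubectl logs flags. A minimal sketch of the same queries run by hand, reusing the pod and namespace names from the log above:

# print only the most recent line
kubectl logs logs-generator --namespace=kubectl-3494 --tail=1
# cap the output at a single byte
kubectl logs logs-generator --namespace=kubectl-3494 --limit-bytes=1
# prefix each line with its RFC3339 timestamp
kubectl logs logs-generator --namespace=kubectl-3494 --tail=1 --timestamps
# restrict to a time range: the last second, then the last 24 hours
kubectl logs logs-generator --namespace=kubectl-3494 --since=1s
kubectl logs logs-generator --namespace=kubectl-3494 --since=24h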
• [SLOW TEST:7.144 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":55,"skipped":909,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:26:28.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 10 21:26:28.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6204' Mar 10 21:26:28.963: INFO: stderr: "" Mar 10 21:26:28.963: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 10 21:26:29.967: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 21:26:29.967: INFO: Found 0 / 1 Mar 10 21:26:30.968: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 21:26:30.968: INFO: Found 0 / 1 Mar 10 21:26:31.968: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 21:26:31.968: INFO: Found 1 / 1 Mar 10 21:26:31.968: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 10 21:26:31.971: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 21:26:31.971: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 10 21:26:31.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-2fkct --namespace=kubectl-6204 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 10 21:26:32.098: INFO: stderr: "" Mar 10 21:26:32.098: INFO: stdout: "pod/agnhost-master-2fkct patched\n" STEP: checking annotations Mar 10 21:26:32.103: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 21:26:32.103: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:26:32.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6204" for this suite. 
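The patch above is a strategic-merge patch against pod metadata. A minimal sketch of the same operation plus a read-back; the jsonpath verification step is an assumption about how one might confirm it, not something the log shows:

# add the annotation x=y, as the test does
kubectl patch pod agnhost-master-2fkct --namespace=kubectl-6204 -p '{"metadata":{"annotations":{"x":"y"}}}'
# read the annotation back (hypothetical verification step)
kubectl get pod agnhost-master-2fkct --namespace=kubectl-6204 -o jsonpath='{.metadata.annotations.x}'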
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":56,"skipped":933,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:26:32.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 10 21:26:38.233: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 10 21:26:38.259: INFO: Pod pod-with-prestop-exec-hook still exists Mar 10 21:26:40.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 10 21:26:40.264: INFO: Pod pod-with-prestop-exec-hook still exists Mar 10 21:26:42.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 10 21:26:42.263: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:26:42.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7205" for this suite. 
• [SLOW TEST:10.174 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":936,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:26:42.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-cfc92744-f90d-408a-8242-01af44164e25 STEP: Creating a pod to test consume configMaps Mar 10 21:26:42.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6b9e163-44a5-4b5a-a75e-4d42d4205e7c" in namespace "configmap-4035" to be "success or failure" Mar 10 21:26:42.371: INFO: Pod "pod-configmaps-c6b9e163-44a5-4b5a-a75e-4d42d4205e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.670054ms Mar 10 21:26:44.376: INFO: Pod "pod-configmaps-c6b9e163-44a5-4b5a-a75e-4d42d4205e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027805761s Mar 10 21:26:46.379: INFO: Pod "pod-configmaps-c6b9e163-44a5-4b5a-a75e-4d42d4205e7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031574272s STEP: Saw pod success Mar 10 21:26:46.379: INFO: Pod "pod-configmaps-c6b9e163-44a5-4b5a-a75e-4d42d4205e7c" satisfied condition "success or failure" Mar 10 21:26:46.382: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c6b9e163-44a5-4b5a-a75e-4d42d4205e7c container configmap-volume-test: STEP: delete the pod Mar 10 21:26:46.400: INFO: Waiting for pod pod-configmaps-c6b9e163-44a5-4b5a-a75e-4d42d4205e7c to disappear Mar 10 21:26:46.405: INFO: Pod pod-configmaps-c6b9e163-44a5-4b5a-a75e-4d42d4205e7c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:26:46.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4035" for this suite. 
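"Consumable in multiple volumes in the same pod" here means a single ConfigMap backing more than one volume mount. A minimal sketch under that reading; names, image, and mount paths are illustrative:

kubectl create configmap demo-configmap --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # illustrative image
    # the same key should be readable through both mounts
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    configMap:
      name: demo-configmap
  - name: cm-two
    configMap:
      name: demo-configmap
EOF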
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":941,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:26:46.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-n8pm STEP: Creating a pod to test atomic-volume-subpath Mar 10 21:26:46.526: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-n8pm" in namespace "subpath-9634" to be "success or failure" Mar 10 21:26:46.531: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.633074ms Mar 10 21:26:48.534: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Running", Reason="", readiness=true. Elapsed: 2.007767412s Mar 10 21:26:50.537: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Running", Reason="", readiness=true. Elapsed: 4.01108764s Mar 10 21:26:52.541: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Running", Reason="", readiness=true. Elapsed: 6.014828973s Mar 10 21:26:54.545: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Running", Reason="", readiness=true. Elapsed: 8.018257591s Mar 10 21:26:56.551: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Running", Reason="", readiness=true. Elapsed: 10.024838328s Mar 10 21:26:58.554: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Running", Reason="", readiness=true. Elapsed: 12.027470288s Mar 10 21:27:00.557: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Running", Reason="", readiness=true. Elapsed: 14.031120441s Mar 10 21:27:02.561: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Running", Reason="", readiness=true. Elapsed: 16.034772163s Mar 10 21:27:04.565: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Running", Reason="", readiness=true. Elapsed: 18.038514666s Mar 10 21:27:06.575: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Running", Reason="", readiness=true. Elapsed: 20.048449549s Mar 10 21:27:08.578: INFO: Pod "pod-subpath-test-configmap-n8pm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.051601516s STEP: Saw pod success Mar 10 21:27:08.578: INFO: Pod "pod-subpath-test-configmap-n8pm" satisfied condition "success or failure" Mar 10 21:27:08.580: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-n8pm container test-container-subpath-configmap-n8pm: STEP: delete the pod Mar 10 21:27:08.633: INFO: Waiting for pod pod-subpath-test-configmap-n8pm to disappear Mar 10 21:27:08.639: INFO: Pod pod-subpath-test-configmap-n8pm no longer exists STEP: Deleting pod pod-subpath-test-configmap-n8pm Mar 10 21:27:08.639: INFO: Deleting pod "pod-subpath-test-configmap-n8pm" in namespace "subpath-9634" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:27:08.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9634" for this suite. • [SLOW TEST:22.234 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":59,"skipped":949,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:27:08.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-fb4c8e7a-5daf-45e0-a9eb-5b4ce40226cb in namespace container-probe-1204 Mar 10 21:27:10.743: INFO: Started pod liveness-fb4c8e7a-5daf-45e0-a9eb-5b4ce40226cb in namespace container-probe-1204 STEP: checking the pod's current state and verifying that restartCount is present Mar 10 21:27:10.746: INFO: Initial restart count of pod liveness-fb4c8e7a-5daf-45e0-a9eb-5b4ce40226cb is 0 Mar 10 21:27:26.799: INFO: Restart count of pod container-probe-1204/liveness-fb4c8e7a-5daf-45e0-a9eb-5b4ce40226cb is now 1 (16.053648492s elapsed) Mar 10 21:27:46.917: INFO: Restart count of pod container-probe-1204/liveness-fb4c8e7a-5daf-45e0-a9eb-5b4ce40226cb is now 2 (36.171729405s elapsed) Mar 10 21:28:06.953: INFO: Restart count of pod container-probe-1204/liveness-fb4c8e7a-5daf-45e0-a9eb-5b4ce40226cb is now 3 (56.206895392s elapsed) Mar 10 21:28:26.989: INFO: Restart count of pod container-probe-1204/liveness-fb4c8e7a-5daf-45e0-a9eb-5b4ce40226cb 
is now 4 (1m16.242962122s elapsed) Mar 10 21:29:29.147: INFO: Restart count of pod container-probe-1204/liveness-fb4c8e7a-5daf-45e0-a9eb-5b4ce40226cb is now 5 (2m18.401409568s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:29:29.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1204" for this suite. • [SLOW TEST:140.541 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":968,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:29:29.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 10 21:29:29.315: INFO: Waiting up to 5m0s for pod "var-expansion-b3f1af43-75fe-482e-941d-473b82f14e8d" in namespace "var-expansion-2925" to be "success or failure" Mar 10 21:29:29.360: INFO: Pod "var-expansion-b3f1af43-75fe-482e-941d-473b82f14e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 45.551607ms Mar 10 21:29:31.364: INFO: Pod "var-expansion-b3f1af43-75fe-482e-941d-473b82f14e8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.04871681s STEP: Saw pod success Mar 10 21:29:31.364: INFO: Pod "var-expansion-b3f1af43-75fe-482e-941d-473b82f14e8d" satisfied condition "success or failure" Mar 10 21:29:31.366: INFO: Trying to get logs from node jerma-worker pod var-expansion-b3f1af43-75fe-482e-941d-473b82f14e8d container dapi-container: STEP: delete the pod Mar 10 21:29:31.408: INFO: Waiting for pod var-expansion-b3f1af43-75fe-482e-941d-473b82f14e8d to disappear Mar 10 21:29:31.417: INFO: Pod var-expansion-b3f1af43-75fe-482e-941d-473b82f14e8d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:29:31.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2925" for this suite. 
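The var-expansion pod above relies on Kubernetes' $(VAR) substitution: the kubelet expands $(MESSAGE) in the container command from the pod's env before the process starts, so no shell is involved in the substitution. A minimal sketch, with an illustrative name and message:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                 # illustrative image
    env:
    - name: MESSAGE
      value: "substituted into the command"
    # $(MESSAGE) is replaced by the kubelet, not by the shell
    command: ["sh", "-c", "echo $(MESSAGE)"]
EOF
kubectl logs var-expansion-demo    # prints the substituted message once the pod has run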
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":972,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:29:31.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 10 21:29:31.525: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:31.550: INFO: Number of nodes with available pods: 0 Mar 10 21:29:31.550: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:29:32.555: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:32.558: INFO: Number of nodes with available pods: 0 Mar 10 21:29:32.558: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:29:33.555: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:33.558: INFO: Number of nodes with available pods: 0 Mar 10 21:29:33.558: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:29:34.559: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:34.561: INFO: Number of nodes with available pods: 2 Mar 10 21:29:34.561: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 10 21:29:34.580: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:34.585: INFO: Number of nodes with available pods: 1 Mar 10 21:29:34.585: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:35.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:35.593: INFO: Number of nodes with available pods: 1 Mar 10 21:29:35.593: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:36.592: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:36.594: INFO: Number of nodes with available pods: 1 Mar 10 21:29:36.594: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:37.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:37.593: INFO: Number of nodes with available pods: 1 Mar 10 21:29:37.593: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:38.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:38.593: INFO: Number of nodes with available pods: 1 Mar 10 21:29:38.593: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:39.607: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:39.610: INFO: Number of nodes with available pods: 1 Mar 10 21:29:39.610: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:40.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:40.594: INFO: Number of nodes with available pods: 1 Mar 10 21:29:40.594: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:41.589: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:41.592: INFO: Number of nodes with available pods: 1 Mar 10 21:29:41.592: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:42.598: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:42.601: INFO: Number of nodes with available pods: 1 Mar 10 21:29:42.601: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:43.607: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:43.610: INFO: Number of nodes with available pods: 1 Mar 10 21:29:43.610: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:44.589: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:44.592: INFO: Number of nodes with available pods: 1 Mar 10 21:29:44.592: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:45.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:45.593: INFO: Number of nodes with available pods: 1 Mar 10 21:29:45.593: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:46.608: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:46.610: INFO: Number of nodes with available pods: 1 Mar 10 21:29:46.610: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:47.589: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:47.592: INFO: Number of nodes with available pods: 1 Mar 10 21:29:47.592: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 21:29:48.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:29:48.593: INFO: Number of nodes with available pods: 2 Mar 10 21:29:48.593: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3044, will wait for the garbage collector to delete the pods Mar 10 21:29:48.655: INFO: Deleting DaemonSet.extensions daemon-set took: 6.329939ms Mar 10 21:29:48.955: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.293022ms Mar 10 21:29:56.057: INFO: Number of nodes with available pods: 0 Mar 10 21:29:56.057: INFO: Number of running nodes: 0, number of available pods: 0 Mar 10 21:29:56.063: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3044/daemonsets","resourceVersion":"673370"},"items":null} Mar 10 21:29:56.065: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3044/pods","resourceVersion":"673370"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:29:56.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3044" for this suite. 
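The stop-and-revive check above can be reproduced by deleting one DaemonSet pod and watching the controller replace it. A minimal sketch, reusing the DaemonSet and namespace names from the log; the daemonset-name label key is an assumption about how the test labels its pods:

# one daemon pod per schedulable node is expected
kubectl get daemonset daemon-set --namespace=daemonsets-3044 -o wide
# pick one of the DaemonSet's pods and delete it
kubectl delete --namespace=daemonsets-3044 "$(kubectl get pods --namespace=daemonsets-3044 -l daemonset-name=daemon-set -o name | head -n 1)"
# watch the controller bring the pod count back up
kubectl get pods --namespace=daemonsets-3044 -l daemonset-name=daemon-set --watch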
• [SLOW TEST:24.656 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":62,"skipped":974,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:29:56.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8404 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 10 21:29:56.240: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 10 21:30:12.333: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.208 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8404 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 21:30:12.333: INFO: >>> kubeConfig: /root/.kube/config I0310 21:30:12.358974 6 log.go:172] (0xc002b08f20) (0xc0002d54a0) Create stream I0310 21:30:12.358998 6 log.go:172] (0xc002b08f20) (0xc0002d54a0) Stream added, broadcasting: 1 I0310 21:30:12.360945 6 log.go:172] (0xc002b08f20) Reply frame received for 1 I0310 21:30:12.360985 6 log.go:172] (0xc002b08f20) (0xc0002d55e0) Create stream I0310 21:30:12.360997 6 log.go:172] (0xc002b08f20) (0xc0002d55e0) Stream added, broadcasting: 3 I0310 21:30:12.361926 6 log.go:172] (0xc002b08f20) Reply frame received for 3 I0310 21:30:12.361948 6 log.go:172] (0xc002b08f20) (0xc0002d5720) Create stream I0310 21:30:12.361958 6 log.go:172] (0xc002b08f20) (0xc0002d5720) Stream added, broadcasting: 5 I0310 21:30:12.362907 6 log.go:172] (0xc002b08f20) Reply frame received for 5 I0310 21:30:13.425926 6 log.go:172] (0xc002b08f20) Data frame received for 3 I0310 21:30:13.426012 6 log.go:172] (0xc0002d55e0) (3) Data frame handling I0310 21:30:13.426044 6 log.go:172] (0xc0002d55e0) (3) Data frame sent I0310 21:30:13.426068 6 log.go:172] (0xc002b08f20) Data frame received for 3 I0310 21:30:13.426086 6 log.go:172] (0xc0002d55e0) (3) Data frame handling I0310 21:30:13.426345 6 log.go:172] (0xc002b08f20) Data frame received for 5 I0310 21:30:13.426379 6 log.go:172] (0xc0002d5720) (5) Data frame handling I0310 21:30:13.428243 6 log.go:172] (0xc002b08f20) Data frame received for 1 I0310 21:30:13.428275 6 log.go:172] (0xc0002d54a0) (1) Data frame handling I0310 21:30:13.428308 6 log.go:172] (0xc0002d54a0) (1) Data frame sent I0310 21:30:13.428329 6 log.go:172] (0xc002b08f20) (0xc0002d54a0) Stream removed, broadcasting: 1 
I0310 21:30:13.428350 6 log.go:172] (0xc002b08f20) Go away received I0310 21:30:13.428643 6 log.go:172] (0xc002b08f20) (0xc0002d54a0) Stream removed, broadcasting: 1 I0310 21:30:13.428675 6 log.go:172] (0xc002b08f20) (0xc0002d55e0) Stream removed, broadcasting: 3 I0310 21:30:13.428697 6 log.go:172] (0xc002b08f20) (0xc0002d5720) Stream removed, broadcasting: 5 Mar 10 21:30:13.428: INFO: Found all expected endpoints: [netserver-0] Mar 10 21:30:13.432: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.188 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8404 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 21:30:13.432: INFO: >>> kubeConfig: /root/.kube/config I0310 21:30:13.462810 6 log.go:172] (0xc0022f1340) (0xc000436d20) Create stream I0310 21:30:13.462834 6 log.go:172] (0xc0022f1340) (0xc000436d20) Stream added, broadcasting: 1 I0310 21:30:13.465150 6 log.go:172] (0xc0022f1340) Reply frame received for 1 I0310 21:30:13.465186 6 log.go:172] (0xc0022f1340) (0xc000915ae0) Create stream I0310 21:30:13.465199 6 log.go:172] (0xc0022f1340) (0xc000915ae0) Stream added, broadcasting: 3 I0310 21:30:13.466309 6 log.go:172] (0xc0022f1340) Reply frame received for 3 I0310 21:30:13.466346 6 log.go:172] (0xc0022f1340) (0xc0002d5860) Create stream I0310 21:30:13.466357 6 log.go:172] (0xc0022f1340) (0xc0002d5860) Stream added, broadcasting: 5 I0310 21:30:13.467504 6 log.go:172] (0xc0022f1340) Reply frame received for 5 I0310 21:30:14.522089 6 log.go:172] (0xc0022f1340) Data frame received for 5 I0310 21:30:14.522157 6 log.go:172] (0xc0002d5860) (5) Data frame handling I0310 21:30:14.522181 6 log.go:172] (0xc0022f1340) Data frame received for 3 I0310 21:30:14.522194 6 log.go:172] (0xc000915ae0) (3) Data frame handling I0310 21:30:14.522218 6 log.go:172] (0xc000915ae0) (3) Data frame sent I0310 21:30:14.522225 6 log.go:172] (0xc0022f1340) Data frame received for 3 I0310 21:30:14.522231 6 log.go:172] (0xc000915ae0) (3) Data frame handling I0310 21:30:14.524903 6 log.go:172] (0xc0022f1340) Data frame received for 1 I0310 21:30:14.524938 6 log.go:172] (0xc000436d20) (1) Data frame handling I0310 21:30:14.524982 6 log.go:172] (0xc000436d20) (1) Data frame sent I0310 21:30:14.525010 6 log.go:172] (0xc0022f1340) (0xc000436d20) Stream removed, broadcasting: 1 I0310 21:30:14.525042 6 log.go:172] (0xc0022f1340) Go away received I0310 21:30:14.525148 6 log.go:172] (0xc0022f1340) (0xc000436d20) Stream removed, broadcasting: 1 I0310 21:30:14.525177 6 log.go:172] (0xc0022f1340) (0xc000915ae0) Stream removed, broadcasting: 3 I0310 21:30:14.525194 6 log.go:172] (0xc0022f1340) (0xc0002d5860) Stream removed, broadcasting: 5 Mar 10 21:30:14.525: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:30:14.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8404" for this suite. 
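The connectivity check above is a one-shot UDP echo: from the host-network test pod, netcat sends "hostName" to each netserver pod IP on port 8081, and any non-blank reply counts as success. A minimal sketch of the same probe run by hand, reusing the pod name and addresses from the log:

# probe netserver-0, then netserver-1, over UDP from the host-network pod
kubectl exec --namespace=pod-network-test-8404 host-test-container-pod -- /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.2.208 8081 | grep -v "^\s*$"'
kubectl exec --namespace=pod-network-test-8404 host-test-container-pod -- /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.1.188 8081 | grep -v "^\s*$"'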
• [SLOW TEST:18.453 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":992,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:30:14.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 10 21:30:20.660: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 10 21:30:20.685: INFO: Pod pod-with-prestop-http-hook still exists Mar 10 21:30:22.685: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 10 21:30:22.688: INFO: Pod pod-with-prestop-http-hook still exists Mar 10 21:30:24.685: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 10 21:30:24.689: INFO: Pod pod-with-prestop-http-hook still exists Mar 10 21:30:26.685: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 10 21:30:26.689: INFO: Pod pod-with-prestop-http-hook still exists Mar 10 21:30:28.685: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 10 21:30:28.688: INFO: Pod pod-with-prestop-http-hook still exists Mar 10 21:30:30.685: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 10 21:30:30.688: INFO: Pod pod-with-prestop-http-hook still exists Mar 10 21:30:32.685: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 10 21:30:32.689: INFO: Pod pod-with-prestop-http-hook still exists Mar 10 21:30:34.685: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 10 21:30:34.688: INFO: Pod pod-with-prestop-http-hook still exists Mar 10 21:30:36.685: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 10 21:30:36.688: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:30:36.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-lifecycle-hook-7415" for this suite. • [SLOW TEST:22.208 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1037,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:30:36.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:30:36.792: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 10 21:30:38.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5094 create -f -' Mar 10 21:30:40.883: INFO: stderr: "" Mar 10 21:30:40.883: INFO: stdout: "e2e-test-crd-publish-openapi-9619-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 10 21:30:40.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5094 delete e2e-test-crd-publish-openapi-9619-crds test-foo' Mar 10 21:30:41.003: INFO: stderr: "" Mar 10 21:30:41.003: INFO: stdout: "e2e-test-crd-publish-openapi-9619-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 10 21:30:41.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5094 apply -f -' Mar 10 21:30:41.253: INFO: stderr: "" Mar 10 21:30:41.253: INFO: stdout: "e2e-test-crd-publish-openapi-9619-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 10 21:30:41.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5094 delete e2e-test-crd-publish-openapi-9619-crds test-foo' Mar 10 21:30:41.337: INFO: stderr: "" Mar 10 21:30:41.337: INFO: stdout: "e2e-test-crd-publish-openapi-9619-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 10 21:30:41.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5094 create -f -' Mar 10 21:30:41.532: INFO: rc: 1 Mar 10 21:30:41.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5094 
apply -f -' Mar 10 21:30:41.748: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 10 21:30:41.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5094 create -f -' Mar 10 21:30:41.992: INFO: rc: 1 Mar 10 21:30:41.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5094 apply -f -' Mar 10 21:30:42.231: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 10 21:30:42.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9619-crds' Mar 10 21:30:42.435: INFO: stderr: "" Mar 10 21:30:42.435: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9619-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 10 21:30:42.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9619-crds.metadata' Mar 10 21:30:42.679: INFO: stderr: "" Mar 10 21:30:42.680: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9619-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). 
Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 10 21:30:42.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9619-crds.spec' Mar 10 21:30:42.922: INFO: stderr: "" Mar 10 21:30:42.922: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9619-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 10 21:30:42.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9619-crds.spec.bars' Mar 10 21:30:43.156: INFO: stderr: "" Mar 10 21:30:43.156: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9619-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 10 21:30:43.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9619-crds.spec.bars2' Mar 10 21:30:43.393: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:30:45.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5094" for this suite. • [SLOW TEST:8.625 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":65,"skipped":1046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:30:45.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 10 21:30:45.468: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3897 
/api/v1/namespaces/watch-3897/configmaps/e2e-watch-test-resource-version b00cb066-f02f-4639-8fc5-2029ad693c07 673667 0 2020-03-10 21:30:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 10 21:30:45.468: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3897 /api/v1/namespaces/watch-3897/configmaps/e2e-watch-test-resource-version b00cb066-f02f-4639-8fc5-2029ad693c07 673668 0 2020-03-10 21:30:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:30:45.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3897" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":66,"skipped":1081,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:30:45.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 10 21:30:45.528: INFO: Waiting up to 5m0s for pod "pod-e8d92bef-2e4a-4edf-bafe-76fafebcec5b" in namespace "emptydir-4922" to be "success or failure" Mar 10 21:30:45.549: INFO: Pod "pod-e8d92bef-2e4a-4edf-bafe-76fafebcec5b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.344366ms Mar 10 21:30:47.552: INFO: Pod "pod-e8d92bef-2e4a-4edf-bafe-76fafebcec5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024510267s STEP: Saw pod success Mar 10 21:30:47.552: INFO: Pod "pod-e8d92bef-2e4a-4edf-bafe-76fafebcec5b" satisfied condition "success or failure" Mar 10 21:30:47.555: INFO: Trying to get logs from node jerma-worker2 pod pod-e8d92bef-2e4a-4edf-bafe-76fafebcec5b container test-container: STEP: delete the pod Mar 10 21:30:47.575: INFO: Waiting for pod pod-e8d92bef-2e4a-4edf-bafe-76fafebcec5b to disappear Mar 10 21:30:47.603: INFO: Pod pod-e8d92bef-2e4a-4edf-bafe-76fafebcec5b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:30:47.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4922" for this suite. 
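The emptydir tests in this run (this one, and the root,0666 variant later) all follow the same shape: create a pod whose container writes and stats a file on an emptyDir mount, wait for phase Succeeded (the "success or failure" condition above), then read the container log. A minimal sketch of such a pod, with hypothetical names and a busybox image standing in for the suite's mounttest image:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo        # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /mnt/volume/f && stat -c '%a' /mnt/volume/f"]
      volumeMounts:
      - name: vol
        mountPath: /mnt/volume
    volumes:
    - name: vol
      emptyDir: {}             # default medium, i.e. node-local disk
  EOF
  kubectl logs emptydir-demo   # once the pod reports Succeeded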
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1173,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:30:47.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 10 21:30:49.697: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:30:49.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4071" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:30:49.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-3336 STEP: creating replication controller nodeport-test in namespace services-3336 I0310 21:30:49.928069 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3336, replica count: 2 I0310 21:30:52.978528 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 10 21:30:52.978: INFO: Creating new exec pod Mar 10 21:30:56.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3336 execpod7sfsd -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 10 21:30:56.271: INFO: stderr: "I0310 21:30:56.209879 1153 log.go:172] (0xc0009960b0) (0xc0009a6000) Create stream\nI0310 21:30:56.209937 1153 log.go:172] (0xc0009960b0) (0xc0009a6000) Stream added, broadcasting: 1\nI0310 21:30:56.212446 1153 log.go:172] (0xc0009960b0) Reply frame received for 1\nI0310 21:30:56.212488 1153 log.go:172] (0xc0009960b0) (0xc0009dc000) Create stream\nI0310 21:30:56.212499 1153 log.go:172] (0xc0009960b0) (0xc0009dc000) Stream added, broadcasting: 3\nI0310 21:30:56.213353 1153 log.go:172] (0xc0009960b0) Reply frame received for 3\nI0310 21:30:56.213380 1153 log.go:172] (0xc0009960b0) (0xc0009dc0a0) Create stream\nI0310 21:30:56.213387 1153 log.go:172] (0xc0009960b0) (0xc0009dc0a0) Stream added, broadcasting: 5\nI0310 21:30:56.214252 1153 log.go:172] (0xc0009960b0) Reply frame received for 5\nI0310 21:30:56.264515 1153 log.go:172] (0xc0009960b0) Data frame received for 5\nI0310 21:30:56.264547 1153 log.go:172] (0xc0009dc0a0) (5) Data frame handling\nI0310 21:30:56.264569 1153 log.go:172] (0xc0009dc0a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0310 21:30:56.265300 1153 log.go:172] (0xc0009960b0) Data frame received for 5\nI0310 21:30:56.265320 1153 log.go:172] (0xc0009dc0a0) (5) Data frame handling\nI0310 21:30:56.265347 1153 log.go:172] (0xc0009dc0a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0310 21:30:56.265383 1153 log.go:172] (0xc0009960b0) Data frame received for 3\nI0310 21:30:56.265405 1153 log.go:172] (0xc0009dc000) (3) Data frame handling\nI0310 21:30:56.265591 1153 log.go:172] (0xc0009960b0) Data frame received 
for 5\nI0310 21:30:56.265605 1153 log.go:172] (0xc0009dc0a0) (5) Data frame handling\nI0310 21:30:56.267224 1153 log.go:172] (0xc0009960b0) Data frame received for 1\nI0310 21:30:56.267400 1153 log.go:172] (0xc0009a6000) (1) Data frame handling\nI0310 21:30:56.267415 1153 log.go:172] (0xc0009a6000) (1) Data frame sent\nI0310 21:30:56.267423 1153 log.go:172] (0xc0009960b0) (0xc0009a6000) Stream removed, broadcasting: 1\nI0310 21:30:56.267432 1153 log.go:172] (0xc0009960b0) Go away received\nI0310 21:30:56.267766 1153 log.go:172] (0xc0009960b0) (0xc0009a6000) Stream removed, broadcasting: 1\nI0310 21:30:56.267783 1153 log.go:172] (0xc0009960b0) (0xc0009dc000) Stream removed, broadcasting: 3\nI0310 21:30:56.267792 1153 log.go:172] (0xc0009960b0) (0xc0009dc0a0) Stream removed, broadcasting: 5\n" Mar 10 21:30:56.271: INFO: stdout: "" Mar 10 21:30:56.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3336 execpod7sfsd -- /bin/sh -x -c nc -zv -t -w 2 10.111.175.31 80' Mar 10 21:30:56.461: INFO: stderr: "I0310 21:30:56.387477 1176 log.go:172] (0xc000b93970) (0xc000a16820) Create stream\nI0310 21:30:56.387514 1176 log.go:172] (0xc000b93970) (0xc000a16820) Stream added, broadcasting: 1\nI0310 21:30:56.390828 1176 log.go:172] (0xc000b93970) Reply frame received for 1\nI0310 21:30:56.390869 1176 log.go:172] (0xc000b93970) (0xc00065a6e0) Create stream\nI0310 21:30:56.390881 1176 log.go:172] (0xc000b93970) (0xc00065a6e0) Stream added, broadcasting: 3\nI0310 21:30:56.391641 1176 log.go:172] (0xc000b93970) Reply frame received for 3\nI0310 21:30:56.391664 1176 log.go:172] (0xc000b93970) (0xc0007834a0) Create stream\nI0310 21:30:56.391669 1176 log.go:172] (0xc000b93970) (0xc0007834a0) Stream added, broadcasting: 5\nI0310 21:30:56.392399 1176 log.go:172] (0xc000b93970) Reply frame received for 5\nI0310 21:30:56.456166 1176 log.go:172] (0xc000b93970) Data frame received for 5\nI0310 21:30:56.456190 1176 log.go:172] (0xc0007834a0) (5) Data frame handling\nI0310 21:30:56.456199 1176 log.go:172] (0xc0007834a0) (5) Data frame sent\nI0310 21:30:56.456204 1176 log.go:172] (0xc000b93970) Data frame received for 5\nI0310 21:30:56.456208 1176 log.go:172] (0xc0007834a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.175.31 80\nConnection to 10.111.175.31 80 port [tcp/http] succeeded!\nI0310 21:30:56.456228 1176 log.go:172] (0xc000b93970) Data frame received for 3\nI0310 21:30:56.456235 1176 log.go:172] (0xc00065a6e0) (3) Data frame handling\nI0310 21:30:56.457318 1176 log.go:172] (0xc000b93970) Data frame received for 1\nI0310 21:30:56.457340 1176 log.go:172] (0xc000a16820) (1) Data frame handling\nI0310 21:30:56.457360 1176 log.go:172] (0xc000a16820) (1) Data frame sent\nI0310 21:30:56.457373 1176 log.go:172] (0xc000b93970) (0xc000a16820) Stream removed, broadcasting: 1\nI0310 21:30:56.457385 1176 log.go:172] (0xc000b93970) Go away received\nI0310 21:30:56.457718 1176 log.go:172] (0xc000b93970) (0xc000a16820) Stream removed, broadcasting: 1\nI0310 21:30:56.457734 1176 log.go:172] (0xc000b93970) (0xc00065a6e0) Stream removed, broadcasting: 3\nI0310 21:30:56.457739 1176 log.go:172] (0xc000b93970) (0xc0007834a0) Stream removed, broadcasting: 5\n" Mar 10 21:30:56.461: INFO: stdout: "" Mar 10 21:30:56.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3336 execpod7sfsd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 30067' Mar 10 21:30:56.630: INFO: stderr: "I0310 21:30:56.556719 1198 log.go:172] (0xc000a3d600) (0xc000ab8820) 
Create stream\nI0310 21:30:56.556764 1198 log.go:172] (0xc000a3d600) (0xc000ab8820) Stream added, broadcasting: 1\nI0310 21:30:56.561093 1198 log.go:172] (0xc000a3d600) Reply frame received for 1\nI0310 21:30:56.561117 1198 log.go:172] (0xc000a3d600) (0xc000775540) Create stream\nI0310 21:30:56.561124 1198 log.go:172] (0xc000a3d600) (0xc000775540) Stream added, broadcasting: 3\nI0310 21:30:56.562132 1198 log.go:172] (0xc000a3d600) Reply frame received for 3\nI0310 21:30:56.562163 1198 log.go:172] (0xc000a3d600) (0xc0007755e0) Create stream\nI0310 21:30:56.562171 1198 log.go:172] (0xc000a3d600) (0xc0007755e0) Stream added, broadcasting: 5\nI0310 21:30:56.563289 1198 log.go:172] (0xc000a3d600) Reply frame received for 5\nI0310 21:30:56.625109 1198 log.go:172] (0xc000a3d600) Data frame received for 3\nI0310 21:30:56.625130 1198 log.go:172] (0xc000775540) (3) Data frame handling\nI0310 21:30:56.625160 1198 log.go:172] (0xc000a3d600) Data frame received for 5\nI0310 21:30:56.625190 1198 log.go:172] (0xc0007755e0) (5) Data frame handling\nI0310 21:30:56.625209 1198 log.go:172] (0xc0007755e0) (5) Data frame sent\nI0310 21:30:56.625218 1198 log.go:172] (0xc000a3d600) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.4 30067\nConnection to 172.17.0.4 30067 port [tcp/30067] succeeded!\nI0310 21:30:56.625224 1198 log.go:172] (0xc0007755e0) (5) Data frame handling\nI0310 21:30:56.626481 1198 log.go:172] (0xc000a3d600) Data frame received for 1\nI0310 21:30:56.626496 1198 log.go:172] (0xc000ab8820) (1) Data frame handling\nI0310 21:30:56.626503 1198 log.go:172] (0xc000ab8820) (1) Data frame sent\nI0310 21:30:56.626515 1198 log.go:172] (0xc000a3d600) (0xc000ab8820) Stream removed, broadcasting: 1\nI0310 21:30:56.626525 1198 log.go:172] (0xc000a3d600) Go away received\nI0310 21:30:56.626859 1198 log.go:172] (0xc000a3d600) (0xc000ab8820) Stream removed, broadcasting: 1\nI0310 21:30:56.626876 1198 log.go:172] (0xc000a3d600) (0xc000775540) Stream removed, broadcasting: 3\nI0310 21:30:56.626886 1198 log.go:172] (0xc000a3d600) (0xc0007755e0) Stream removed, broadcasting: 5\n" Mar 10 21:30:56.631: INFO: stdout: "" Mar 10 21:30:56.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3336 execpod7sfsd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 30067' Mar 10 21:30:56.809: INFO: stderr: "I0310 21:30:56.745621 1218 log.go:172] (0xc0003bed10) (0xc000741400) Create stream\nI0310 21:30:56.745675 1218 log.go:172] (0xc0003bed10) (0xc000741400) Stream added, broadcasting: 1\nI0310 21:30:56.747783 1218 log.go:172] (0xc0003bed10) Reply frame received for 1\nI0310 21:30:56.747810 1218 log.go:172] (0xc0003bed10) (0xc0007159a0) Create stream\nI0310 21:30:56.747820 1218 log.go:172] (0xc0003bed10) (0xc0007159a0) Stream added, broadcasting: 3\nI0310 21:30:56.748523 1218 log.go:172] (0xc0003bed10) Reply frame received for 3\nI0310 21:30:56.748550 1218 log.go:172] (0xc0003bed10) (0xc0002aa000) Create stream\nI0310 21:30:56.748561 1218 log.go:172] (0xc0003bed10) (0xc0002aa000) Stream added, broadcasting: 5\nI0310 21:30:56.749373 1218 log.go:172] (0xc0003bed10) Reply frame received for 5\nI0310 21:30:56.804002 1218 log.go:172] (0xc0003bed10) Data frame received for 5\nI0310 21:30:56.804023 1218 log.go:172] (0xc0002aa000) (5) Data frame handling\nI0310 21:30:56.804030 1218 log.go:172] (0xc0002aa000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.5 30067\nConnection to 172.17.0.5 30067 port [tcp/30067] succeeded!\nI0310 21:30:56.804363 1218 log.go:172] (0xc0003bed10) Data frame 
received for 5\nI0310 21:30:56.804385 1218 log.go:172] (0xc0002aa000) (5) Data frame handling\nI0310 21:30:56.804408 1218 log.go:172] (0xc0003bed10) Data frame received for 3\nI0310 21:30:56.804419 1218 log.go:172] (0xc0007159a0) (3) Data frame handling\nI0310 21:30:56.805892 1218 log.go:172] (0xc0003bed10) Data frame received for 1\nI0310 21:30:56.805911 1218 log.go:172] (0xc000741400) (1) Data frame handling\nI0310 21:30:56.805924 1218 log.go:172] (0xc000741400) (1) Data frame sent\nI0310 21:30:56.805944 1218 log.go:172] (0xc0003bed10) (0xc000741400) Stream removed, broadcasting: 1\nI0310 21:30:56.805961 1218 log.go:172] (0xc0003bed10) Go away received\nI0310 21:30:56.806409 1218 log.go:172] (0xc0003bed10) (0xc000741400) Stream removed, broadcasting: 1\nI0310 21:30:56.806432 1218 log.go:172] (0xc0003bed10) (0xc0007159a0) Stream removed, broadcasting: 3\nI0310 21:30:56.806495 1218 log.go:172] (0xc0003bed10) (0xc0002aa000) Stream removed, broadcasting: 5\n" Mar 10 21:30:56.810: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:30:56.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3336" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.096 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":69,"skipped":1208,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:30:56.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:13.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9908" for this suite. • [SLOW TEST:16.253 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":70,"skipped":1212,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:13.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 10 21:31:13.129: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix638736932/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:13.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2203" for this suite. 
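What the proxy test exercises can be reproduced directly: kubectl proxy accepts --unix-socket in place of a TCP port, and the suite then fetches /api/ through that socket. The same round trip with curl, assuming a curl build with unix-socket support (7.40+) and a hypothetical socket path:

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
  kill %1   # stop the background proxy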
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":71,"skipped":1222,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:13.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:31:13.301: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:14.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-377" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":72,"skipped":1244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:14.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 10 21:31:14.441: INFO: Waiting up to 5m0s for pod "downward-api-04ef2c71-2c36-49e5-92aa-b26101ecd0aa" in namespace "downward-api-9088" to be "success or failure" Mar 10 21:31:14.445: INFO: Pod "downward-api-04ef2c71-2c36-49e5-92aa-b26101ecd0aa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.724484ms Mar 10 21:31:16.447: INFO: Pod "downward-api-04ef2c71-2c36-49e5-92aa-b26101ecd0aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006213137s STEP: Saw pod success Mar 10 21:31:16.447: INFO: Pod "downward-api-04ef2c71-2c36-49e5-92aa-b26101ecd0aa" satisfied condition "success or failure" Mar 10 21:31:16.448: INFO: Trying to get logs from node jerma-worker pod downward-api-04ef2c71-2c36-49e5-92aa-b26101ecd0aa container dapi-container: STEP: delete the pod Mar 10 21:31:16.469: INFO: Waiting for pod downward-api-04ef2c71-2c36-49e5-92aa-b26101ecd0aa to disappear Mar 10 21:31:16.474: INFO: Pod downward-api-04ef2c71-2c36-49e5-92aa-b26101ecd0aa no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:16.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9088" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1284,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:16.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 10 21:31:16.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 10 21:31:16.721: INFO: stderr: "" Mar 10 21:31:16.721: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:16.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1334" for this suite. 
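The api-versions check reduces to string matching on kubectl output: the \n-separated stdout captured above is one group/version per line, and the test passes if the bare core version "v1" is present. The same check as a one-liner:

  kubectl api-versions | grep -x v1 && echo "core v1 is served"

grep -x insists on a whole-line match, so serving only, say, "admissionregistration.k8s.io/v1" would not satisfy it.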
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":74,"skipped":1288,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:16.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-490 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-490 STEP: Creating statefulset with conflicting port in namespace statefulset-490 STEP: Waiting until pod test-pod will start running in namespace statefulset-490 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-490 Mar 10 21:31:20.885: INFO: Observed stateful pod in namespace: statefulset-490, name: ss-0, uid: dac8ffd1-aaf0-4a04-bd7f-050ffbc7a126, status phase: Failed. Waiting for statefulset controller to delete. Mar 10 21:31:20.888: INFO: Observed stateful pod in namespace: statefulset-490, name: ss-0, uid: dac8ffd1-aaf0-4a04-bd7f-050ffbc7a126, status phase: Failed. Waiting for statefulset controller to delete. Mar 10 21:31:20.911: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-490 STEP: Removing pod with conflicting port in namespace statefulset-490 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-490 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 10 21:31:22.973: INFO: Deleting all statefulset in ns statefulset-490 Mar 10 21:31:22.976: INFO: Scaling statefulset ss to 0 Mar 10 21:31:43.006: INFO: Waiting for statefulset status.replicas updated to 0 Mar 10 21:31:43.008: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:43.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-490" for this suite. 
• [SLOW TEST:26.299 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":75,"skipped":1288,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:43.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 10 21:31:45.096: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:45.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7327" for this suite. 
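This is the failing-container counterpart of the earlier termination-message test: here the container exits non-zero, so FallbackToLogsOnError does copy the log tail ("DONE") into the termination message, which is exactly what the "Expected: &{DONE} to match Container's Termination Message: DONE" line asserts. Adapting the earlier sketch (hypothetical names again):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termmsg-failure-demo
  spec:
    restartPolicy: Never
    containers:
    - name: c
      image: busybox
      command: ["sh", "-c", "echo DONE; exit 1"]   # fails after logging
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  # prints DONE once the pod is Failed
  kubectl get pod termmsg-failure-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'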
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1298,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:45.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-71cfde54-e63a-47f4-8d0a-fc2d2923feaa STEP: Creating a pod to test consume secrets Mar 10 21:31:45.185: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b8e37abc-ce54-4815-8fe4-59877a908dba" in namespace "projected-4782" to be "success or failure" Mar 10 21:31:45.188: INFO: Pod "pod-projected-secrets-b8e37abc-ce54-4815-8fe4-59877a908dba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.577307ms Mar 10 21:31:47.191: INFO: Pod "pod-projected-secrets-b8e37abc-ce54-4815-8fe4-59877a908dba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00634519s STEP: Saw pod success Mar 10 21:31:47.191: INFO: Pod "pod-projected-secrets-b8e37abc-ce54-4815-8fe4-59877a908dba" satisfied condition "success or failure" Mar 10 21:31:47.194: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b8e37abc-ce54-4815-8fe4-59877a908dba container projected-secret-volume-test: STEP: delete the pod Mar 10 21:31:47.234: INFO: Waiting for pod pod-projected-secrets-b8e37abc-ce54-4815-8fe4-59877a908dba to disappear Mar 10 21:31:47.241: INFO: Pod pod-projected-secrets-b8e37abc-ce54-4815-8fe4-59877a908dba no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:47.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4782" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1317,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:47.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 10 21:31:47.327: INFO: Waiting up to 5m0s for pod "pod-e32ad06b-c603-4c07-9ff4-91b7175b53ae" in namespace "emptydir-8313" to be "success or failure" Mar 10 21:31:47.343: INFO: Pod "pod-e32ad06b-c603-4c07-9ff4-91b7175b53ae": Phase="Pending", Reason="", readiness=false. Elapsed: 15.59428ms Mar 10 21:31:49.347: INFO: Pod "pod-e32ad06b-c603-4c07-9ff4-91b7175b53ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019768102s STEP: Saw pod success Mar 10 21:31:49.347: INFO: Pod "pod-e32ad06b-c603-4c07-9ff4-91b7175b53ae" satisfied condition "success or failure" Mar 10 21:31:49.352: INFO: Trying to get logs from node jerma-worker pod pod-e32ad06b-c603-4c07-9ff4-91b7175b53ae container test-container: STEP: delete the pod Mar 10 21:31:49.390: INFO: Waiting for pod pod-e32ad06b-c603-4c07-9ff4-91b7175b53ae to disappear Mar 10 21:31:49.428: INFO: Pod pod-e32ad06b-c603-4c07-9ff4-91b7175b53ae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:49.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8313" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1323,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:49.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 10 21:31:49.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5715' Mar 10 21:31:49.628: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 10 21:31:49.628: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582 Mar 10 21:31:51.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5715' Mar 10 21:31:51.810: INFO: stderr: "" Mar 10 21:31:51.810: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:51.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5715" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":79,"skipped":1329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:51.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-da301c30-9205-4b23-8d21-d417bf94e550 STEP: Creating a pod to test consume secrets Mar 10 21:31:51.915: INFO: Waiting up to 5m0s for pod "pod-secrets-a9f9d976-97ef-4f7d-b798-0eac8d589803" in namespace "secrets-5842" to be "success or failure" Mar 10 21:31:51.919: INFO: Pod "pod-secrets-a9f9d976-97ef-4f7d-b798-0eac8d589803": Phase="Pending", Reason="", readiness=false. Elapsed: 3.702457ms Mar 10 21:31:53.922: INFO: Pod "pod-secrets-a9f9d976-97ef-4f7d-b798-0eac8d589803": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007174924s Mar 10 21:31:55.926: INFO: Pod "pod-secrets-a9f9d976-97ef-4f7d-b798-0eac8d589803": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010930092s STEP: Saw pod success Mar 10 21:31:55.926: INFO: Pod "pod-secrets-a9f9d976-97ef-4f7d-b798-0eac8d589803" satisfied condition "success or failure" Mar 10 21:31:55.929: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a9f9d976-97ef-4f7d-b798-0eac8d589803 container secret-volume-test: STEP: delete the pod Mar 10 21:31:55.997: INFO: Waiting for pod pod-secrets-a9f9d976-97ef-4f7d-b798-0eac8d589803 to disappear Mar 10 21:31:55.999: INFO: Pod pod-secrets-a9f9d976-97ef-4f7d-b798-0eac8d589803 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:55.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5842" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1352,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:56.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:31:56.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 10 21:31:56.166: INFO: stderr: "" Mar 10 21:31:56.166: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:31:56.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8958" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":81,"skipped":1364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:31:56.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 10 21:31:56.695: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 10 21:31:58.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472716, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472716, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472716, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472716, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:32:01.746: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:32:01.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:32:02.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4015" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.783 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":82,"skipped":1406,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:32:02.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 10 21:32:03.301: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-282 /api/v1/namespaces/watch-282/configmaps/e2e-watch-test-label-changed e4d20069-d882-4b49-9dba-b256c9b28dda 674483 0 2020-03-10 21:32:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 10 21:32:03.302: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-282 /api/v1/namespaces/watch-282/configmaps/e2e-watch-test-label-changed e4d20069-d882-4b49-9dba-b256c9b28dda 674484 0 2020-03-10 21:32:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 10 21:32:03.302: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-282 /api/v1/namespaces/watch-282/configmaps/e2e-watch-test-label-changed e4d20069-d882-4b49-9dba-b256c9b28dda 674486 0 2020-03-10 21:32:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 10 21:32:13.470: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-282 
/api/v1/namespaces/watch-282/configmaps/e2e-watch-test-label-changed e4d20069-d882-4b49-9dba-b256c9b28dda 674532 0 2020-03-10 21:32:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 10 21:32:13.470: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-282 /api/v1/namespaces/watch-282/configmaps/e2e-watch-test-label-changed e4d20069-d882-4b49-9dba-b256c9b28dda 674533 0 2020-03-10 21:32:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 10 21:32:13.470: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-282 /api/v1/namespaces/watch-282/configmaps/e2e-watch-test-label-changed e4d20069-d882-4b49-9dba-b256c9b28dda 674534 0 2020-03-10 21:32:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:32:13.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-282" for this suite. • [SLOW TEST:10.521 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":83,"skipped":1424,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:32:13.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-9077 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9077 to expose endpoints map[] Mar 10 21:32:13.583: INFO: Get endpoints failed (33.125331ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 10 21:32:14.587: INFO: successfully validated that service multi-endpoint-test in namespace services-9077 exposes endpoints map[] (1.036954475s elapsed) STEP: Creating pod pod1 in namespace services-9077 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9077 to expose endpoints map[pod1:[100]] Mar 10 21:32:16.620: INFO: successfully validated that service multi-endpoint-test in namespace services-9077 
exposes endpoints map[pod1:[100]] (2.026504959s elapsed) STEP: Creating pod pod2 in namespace services-9077 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9077 to expose endpoints map[pod1:[100] pod2:[101]] Mar 10 21:32:18.718: INFO: successfully validated that service multi-endpoint-test in namespace services-9077 exposes endpoints map[pod1:[100] pod2:[101]] (2.094851106s elapsed) STEP: Deleting pod pod1 in namespace services-9077 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9077 to expose endpoints map[pod2:[101]] Mar 10 21:32:19.771: INFO: successfully validated that service multi-endpoint-test in namespace services-9077 exposes endpoints map[pod2:[101]] (1.050063814s elapsed) STEP: Deleting pod pod2 in namespace services-9077 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9077 to expose endpoints map[] Mar 10 21:32:20.828: INFO: successfully validated that service multi-endpoint-test in namespace services-9077 exposes endpoints map[] (1.051958818s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:32:20.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9077" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.391 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":84,"skipped":1440,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:32:20.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:32:44.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8725" for this suite. • [SLOW TEST:23.475 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1455,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:32:44.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:32:44.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7912" for this suite. 
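The table-transformation test above exercises server-side printing: a client opts into the Table rendering through the Accept header, and a backend that cannot produce Table metadata must answer 406 Not Acceptable. A sketch of making such a request with client-go (assumes a client-go recent enough for the context-taking DoRaw):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the apiserver to render pods as a meta.k8s.io/v1 Table.
	raw, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/namespaces/default/pods").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		DoRaw(context.TODO())
	if err != nil {
		panic(err) // a backend without Table support surfaces the 406 here
	}
	fmt.Printf("%d bytes of Table JSON\n", len(raw))
}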
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":86,"skipped":1462,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:32:44.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 10 21:32:44.471: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 10 21:32:44.496: INFO: Waiting for terminating namespaces to be deleted... Mar 10 21:32:44.497: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 10 21:32:44.502: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:32:44.502: INFO: Container kube-proxy ready: true, restart count 0 Mar 10 21:32:44.502: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:32:44.502: INFO: Container kindnet-cni ready: true, restart count 0 Mar 10 21:32:44.502: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 10 21:32:44.516: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:32:44.516: INFO: Container kube-proxy ready: true, restart count 0 Mar 10 21:32:44.516: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:32:44.516: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-0b13603c-1cda-413e-bc6e-a5fcb48b7d67 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-0b13603c-1cda-413e-bc6e-a5fcb48b7d67 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-0b13603c-1cda-413e-bc6e-a5fcb48b7d67 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:32:54.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5225" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.280 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":87,"skipped":1463,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:32:54.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 10 21:32:54.784: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:32:59.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4439" for this suite. 
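The init-container spec above must run each entry of initContainers to completion, in order, before any regular container starts; with restartPolicy Always the pod then stays Running, which is the state the test waits for. A sketch of the shape of such a spec (images, names, and commands are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyAlways,
		// Run one at a time, each to completion, before Containers start.
		InitContainers: []corev1.Container{
			{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
			{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
		},
		Containers: []corev1.Container{
			{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
		},
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}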
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":88,"skipped":1476,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:32:59.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 10 21:33:01.753: INFO: Successfully updated pod "labelsupdatec862db83-5fa9-4385-937d-6568bfbf2cdb" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:03.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4460" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1478,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:03.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:33:04.474: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Mar 10 21:33:06.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472784, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472784, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63719472784, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472784, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:33:09.536: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:21.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6775" for this suite. STEP: Destroying namespace "webhook-6775-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.126 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":90,"skipped":1539,"failed":0} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:21.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 10 21:33:22.481: INFO: created pod pod-service-account-defaultsa Mar 10 21:33:22.481: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 10 21:33:22.513: INFO: created pod pod-service-account-mountsa Mar 10 21:33:22.513: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 10 21:33:22.533: INFO: created pod pod-service-account-nomountsa Mar 10 
21:33:22.533: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 10 21:33:22.582: INFO: created pod pod-service-account-defaultsa-mountspec Mar 10 21:33:22.582: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 10 21:33:22.593: INFO: created pod pod-service-account-mountsa-mountspec Mar 10 21:33:22.593: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 10 21:33:22.647: INFO: created pod pod-service-account-nomountsa-mountspec Mar 10 21:33:22.647: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 10 21:33:22.704: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 10 21:33:22.704: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 10 21:33:22.738: INFO: created pod pod-service-account-mountsa-nomountspec Mar 10 21:33:22.738: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 10 21:33:22.795: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 10 21:33:22.795: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:22.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9034" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":91,"skipped":1549,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:22.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-30466754-43a3-4ea9-b0e4-87e67d3dfc60 STEP: Creating a pod to test consume configMaps Mar 10 21:33:23.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-89c5724d-d99f-44b8-af24-244c61a13c8b" in namespace "configmap-404" to be "success or failure" Mar 10 21:33:23.147: INFO: Pod "pod-configmaps-89c5724d-d99f-44b8-af24-244c61a13c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.865569ms Mar 10 21:33:25.165: INFO: Pod "pod-configmaps-89c5724d-d99f-44b8-af24-244c61a13c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076822799s Mar 10 21:33:27.191: INFO: Pod "pod-configmaps-89c5724d-d99f-44b8-af24-244c61a13c8b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.102391411s STEP: Saw pod success Mar 10 21:33:27.191: INFO: Pod "pod-configmaps-89c5724d-d99f-44b8-af24-244c61a13c8b" satisfied condition "success or failure" Mar 10 21:33:27.194: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-89c5724d-d99f-44b8-af24-244c61a13c8b container configmap-volume-test: STEP: delete the pod Mar 10 21:33:27.217: INFO: Waiting for pod pod-configmaps-89c5724d-d99f-44b8-af24-244c61a13c8b to disappear Mar 10 21:33:27.223: INFO: Pod pod-configmaps-89c5724d-d99f-44b8-af24-244c61a13c8b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:27.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-404" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:27.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-0d474b1b-b08c-41bc-8eea-c82955b49e83 STEP: Creating a pod to test consume configMaps Mar 10 21:33:27.361: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a2702ae-fd0d-41a5-abed-cb58af7c51dd" in namespace "projected-3379" to be "success or failure" Mar 10 21:33:27.366: INFO: Pod "pod-projected-configmaps-2a2702ae-fd0d-41a5-abed-cb58af7c51dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.580746ms Mar 10 21:33:29.370: INFO: Pod "pod-projected-configmaps-2a2702ae-fd0d-41a5-abed-cb58af7c51dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008517146s STEP: Saw pod success Mar 10 21:33:29.370: INFO: Pod "pod-projected-configmaps-2a2702ae-fd0d-41a5-abed-cb58af7c51dd" satisfied condition "success or failure" Mar 10 21:33:29.373: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-2a2702ae-fd0d-41a5-abed-cb58af7c51dd container projected-configmap-volume-test: STEP: delete the pod Mar 10 21:33:29.403: INFO: Waiting for pod pod-projected-configmaps-2a2702ae-fd0d-41a5-abed-cb58af7c51dd to disappear Mar 10 21:33:29.424: INFO: Pod pod-projected-configmaps-2a2702ae-fd0d-41a5-abed-cb58af7c51dd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:29.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3379" for this suite. 
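The projected-configmap spec above is the non-root variant of the earlier mapping tests: a pod-level runAsUser plus a projected configMap source whose items entry remaps a key to a nested path. A sketch of the spec (the UID, mode, and key/path names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000)  // illustrative non-root UID
	mode := int32(0400) // illustrative per-item mode
	spec := corev1.PodSpec{
		RestartPolicy:   corev1.RestartPolicyNever,
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		Containers: []corev1.Container{{
			Name:    "projected-configmap-volume-test",
			Image:   "busybox",
			Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
			VolumeMounts: []corev1.VolumeMount{{
				Name: "cfg", MountPath: "/etc/projected-configmap-volume", ReadOnly: true,
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "cfg",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
							Items:                []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2", Mode: &mode}},
						},
					}},
				},
			},
		}},
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}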
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1590,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:29.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:33:29.504: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f076970-b905-49cd-a452-27e91eec14c0" in namespace "downward-api-9277" to be "success or failure" Mar 10 21:33:29.515: INFO: Pod "downwardapi-volume-2f076970-b905-49cd-a452-27e91eec14c0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.232461ms Mar 10 21:33:31.519: INFO: Pod "downwardapi-volume-2f076970-b905-49cd-a452-27e91eec14c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015235942s STEP: Saw pod success Mar 10 21:33:31.519: INFO: Pod "downwardapi-volume-2f076970-b905-49cd-a452-27e91eec14c0" satisfied condition "success or failure" Mar 10 21:33:31.521: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-2f076970-b905-49cd-a452-27e91eec14c0 container client-container: STEP: delete the pod Mar 10 21:33:31.569: INFO: Waiting for pod downwardapi-volume-2f076970-b905-49cd-a452-27e91eec14c0 to disappear Mar 10 21:33:31.581: INFO: Pod downwardapi-volume-2f076970-b905-49cd-a452-27e91eec14c0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:31.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9277" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1599,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:31.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-dcd866a7-1889-4fb6-b87a-feb6e51479ca STEP: Creating a pod to test consume secrets Mar 10 21:33:31.684: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-56430da4-d32d-440c-afdf-68d76a826b93" in namespace "projected-5602" to be "success or failure" Mar 10 21:33:31.723: INFO: Pod "pod-projected-secrets-56430da4-d32d-440c-afdf-68d76a826b93": Phase="Pending", Reason="", readiness=false. Elapsed: 38.954867ms Mar 10 21:33:33.733: INFO: Pod "pod-projected-secrets-56430da4-d32d-440c-afdf-68d76a826b93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.04902708s STEP: Saw pod success Mar 10 21:33:33.733: INFO: Pod "pod-projected-secrets-56430da4-d32d-440c-afdf-68d76a826b93" satisfied condition "success or failure" Mar 10 21:33:33.744: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-56430da4-d32d-440c-afdf-68d76a826b93 container projected-secret-volume-test: STEP: delete the pod Mar 10 21:33:33.809: INFO: Waiting for pod pod-projected-secrets-56430da4-d32d-440c-afdf-68d76a826b93 to disappear Mar 10 21:33:33.815: INFO: Pod pod-projected-secrets-56430da4-d32d-440c-afdf-68d76a826b93 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:33.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5602" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1611,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:33.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:44.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6750" for this suite. • [SLOW TEST:11.116 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":96,"skipped":1611,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:44.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 10 21:33:44.967: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 10 21:33:45.045: INFO: Waiting for terminating namespaces to be deleted... 
Mar 10 21:33:45.048: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 10 21:33:45.052: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:33:45.052: INFO: Container kube-proxy ready: true, restart count 0 Mar 10 21:33:45.052: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:33:45.052: INFO: Container kindnet-cni ready: true, restart count 0 Mar 10 21:33:45.052: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 10 21:33:45.057: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:33:45.057: INFO: Container kube-proxy ready: true, restart count 0 Mar 10 21:33:45.057: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 10 21:33:45.057: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7981783d-ba58-440c-a057-0cb2b8a18a78 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-7981783d-ba58-440c-a057-0cb2b8a18a78 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7981783d-ba58-440c-a057-0cb2b8a18a78 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:51.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3806" for this suite.
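The NodeSelector test above applies a random kubernetes.io/e2e-... label to one node and relaunches the pod with a matching nodeSelector, which restricts scheduling to exactly that node. A sketch of the pod side (the label key and value are copied from this run's log; the image is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{
			{Name: "with-labels", Image: "k8s.gcr.io/pause:3.1"},
		},
		// Only a node carrying this exact label/value is eligible; the test
		// labeled jerma-worker2 just before relaunching the pod.
		NodeSelector: map[string]string{
			"kubernetes.io/e2e-7981783d-ba58-440c-a057-0cb2b8a18a78": "42",
		},
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}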
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:6.278 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":97,"skipped":1657,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:51.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-d3199df7-16a3-4b6d-9766-11fb47612d6a [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:51.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-588" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":98,"skipped":1667,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:51.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 10 21:33:51.346: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7566" to be "success or failure" Mar 10 21:33:51.387: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 41.330443ms Mar 10 21:33:53.391: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.045153828s STEP: Saw pod success Mar 10 21:33:53.391: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 10 21:33:53.393: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 10 21:33:53.436: INFO: Waiting for pod pod-host-path-test to disappear Mar 10 21:33:53.444: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:33:53.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7566" for this suite. •{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1716,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:33:53.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5992.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5992.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5992.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5992.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5992.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5992.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 10 21:33:57.627: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:33:57.629: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:33:57.631: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:33:57.634: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:33:57.642: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:33:57.645: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:33:57.647: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod 
dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:33:57.650: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:33:57.655: INFO: Lookups using dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local] Mar 10 21:34:02.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:02.663: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:02.669: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:02.673: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:02.682: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:02.684: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:02.687: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:02.689: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:02.694: INFO: Lookups using dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local] Mar 10 21:34:07.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:07.664: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:07.667: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:07.670: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:07.679: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:07.682: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:07.685: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:07.688: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:07.694: INFO: Lookups using dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local] Mar 10 21:34:12.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:12.663: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:12.666: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:12.669: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:12.677: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:12.680: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:12.683: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:12.686: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:12.692: INFO: Lookups using dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local] Mar 10 21:34:17.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:17.664: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:17.668: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:17.671: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested 
resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:17.680: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:17.684: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:17.687: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:17.690: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:17.703: INFO: Lookups using dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local] Mar 10 21:34:22.659: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:22.662: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:22.666: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:22.669: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:22.682: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:22.684: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:22.687: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:22.689: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local from pod dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3: the server could not find the requested resource (get pods dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3) Mar 10 21:34:22.695: INFO: Lookups using dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5992.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5992.svc.cluster.local jessie_udp@dns-test-service-2.dns-5992.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5992.svc.cluster.local] Mar 10 21:34:27.693: INFO: DNS probes using dns-5992/dns-test-fe6ca5cf-616c-4784-89f1-d9f788073fa3 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:34:27.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5992" for this suite. • [SLOW TEST:34.377 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":100,"skipped":1750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:34:27.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-1e9748f3-c682-4aaa-9f72-e91fd64594c6 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-1e9748f3-c682-4aaa-9f72-e91fd64594c6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:34:31.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8329" for this suite. 
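------------------------------
The names probed above exist because of two pod spec fields, not anything DNS-specific inside the containers: a headless Service supplies the subdomain, and any pod that sets hostname and subdomain to match gets an A record of the form <hostname>.<subdomain>.<ns>.svc.cluster.local. A minimal sketch of the two objects, under the same client-go assumptions as above; the namespace, selector, port, and image are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "default" // the suite uses a generated namespace such as dns-5992

	// Headless service (clusterIP: None); its name becomes the pod subdomain.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"},
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Matching pod: hostname + subdomain yield the record
	// dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "dns-querier-2",
			Labels: map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: "dns-test-service-2",
			Containers: []corev1.Container{{
				Name:    "querier",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The wheezy/jessie probe pods then only need the dig loops quoted earlier; the repeated "Unable to read" lines are the expected polling while the endpoints and records converge, after which the probes succeed.
------------------------------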
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1780,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:34:31.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:34:39.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7697" for this suite. • [SLOW TEST:7.153 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":102,"skipped":1783,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:34:39.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-rcd5 STEP: Creating a pod to test atomic-volume-subpath Mar 10 21:34:39.221: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rcd5" in namespace "subpath-5860" to be "success or failure" Mar 10 21:34:39.238: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.143878ms Mar 10 21:34:41.242: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020859124s Mar 10 21:34:43.245: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.024381138s Mar 10 21:34:45.249: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Running", Reason="", readiness=true. Elapsed: 6.028070639s Mar 10 21:34:47.252: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Running", Reason="", readiness=true. Elapsed: 8.031477518s Mar 10 21:34:49.257: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Running", Reason="", readiness=true. Elapsed: 10.035817885s Mar 10 21:34:51.261: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Running", Reason="", readiness=true. Elapsed: 12.039749756s Mar 10 21:34:53.264: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Running", Reason="", readiness=true. Elapsed: 14.04322531s Mar 10 21:34:55.268: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Running", Reason="", readiness=true. Elapsed: 16.047565042s Mar 10 21:34:57.272: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Running", Reason="", readiness=true. Elapsed: 18.050964267s Mar 10 21:34:59.276: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Running", Reason="", readiness=true. Elapsed: 20.054720411s Mar 10 21:35:01.280: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Running", Reason="", readiness=true. Elapsed: 22.058767946s Mar 10 21:35:03.283: INFO: Pod "pod-subpath-test-downwardapi-rcd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.062301199s STEP: Saw pod success Mar 10 21:35:03.283: INFO: Pod "pod-subpath-test-downwardapi-rcd5" satisfied condition "success or failure" Mar 10 21:35:03.286: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-rcd5 container test-container-subpath-downwardapi-rcd5: STEP: delete the pod Mar 10 21:35:03.304: INFO: Waiting for pod pod-subpath-test-downwardapi-rcd5 to disappear Mar 10 21:35:03.333: INFO: Pod pod-subpath-test-downwardapi-rcd5 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-rcd5 Mar 10 21:35:03.333: INFO: Deleting pod "pod-subpath-test-downwardapi-rcd5" in namespace "subpath-5860" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:35:03.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5860" for this suite. 
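------------------------------
The subpath spec above mounts a single file out of a downward API volume by setting subPath on the volume mount, then lets the container read it repeatedly while the kubelet refreshes the volume, which is why the log shows many seconds of Running polls before Succeeded. A minimal sketch of the same shape, under the same client-go assumptions; the pod name, paths, and image are illustrative, not the suite's code:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Downward API volume exposing the pod name as a file; the subPath mount
	// hands the container that single file rather than the whole directory.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-downward-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/mnt/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward",
					MountPath: "/mnt/podname",
					SubPath:   "podname",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------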
• [SLOW TEST:24.206 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":103,"skipped":1786,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:35:03.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:35:04.171: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 10 21:35:06.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472904, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472904, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472904, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472904, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:35:09.219: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the 
validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:35:09.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3546" for this suite. STEP: Destroying namespace "webhook-3546-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.084 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":104,"skipped":1790,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:35:09.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:35:09.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47987f8d-da72-4537-bb9a-ce9a7bbb0178" in namespace "projected-6758" to be "success or failure" Mar 10 21:35:09.519: INFO: Pod "downwardapi-volume-47987f8d-da72-4537-bb9a-ce9a7bbb0178": Phase="Pending", Reason="", readiness=false. Elapsed: 23.003722ms Mar 10 21:35:11.523: INFO: Pod "downwardapi-volume-47987f8d-da72-4537-bb9a-ce9a7bbb0178": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026555746s STEP: Saw pod success Mar 10 21:35:11.523: INFO: Pod "downwardapi-volume-47987f8d-da72-4537-bb9a-ce9a7bbb0178" satisfied condition "success or failure" Mar 10 21:35:11.526: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-47987f8d-da72-4537-bb9a-ce9a7bbb0178 container client-container: STEP: delete the pod Mar 10 21:35:11.586: INFO: Waiting for pod downwardapi-volume-47987f8d-da72-4537-bb9a-ce9a7bbb0178 to disappear Mar 10 21:35:11.610: INFO: Pod downwardapi-volume-47987f8d-da72-4537-bb9a-ce9a7bbb0178 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:35:11.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6758" for this suite. 
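------------------------------
DefaultMode in the spec just completed is a field on the projected volume source itself: every file the projection writes receives that mode unless an individual item overrides it. A sketch that sets 0400 and prints the resulting permissions, under the same assumptions as the earlier sketches (current client-go, illustrative names):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	mode := int32(0400) // -r-------- on every projected file

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				// Print the file's mode; with the volume above it should be 400.
				Command:      []string{"sh", "-c", "stat -c '%a' /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------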
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1807,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:35:11.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 10 21:35:11.671: INFO: Waiting up to 5m0s for pod "pod-9ed2f948-6f15-4549-adc3-6627613fd346" in namespace "emptydir-9858" to be "success or failure" Mar 10 21:35:11.699: INFO: Pod "pod-9ed2f948-6f15-4549-adc3-6627613fd346": Phase="Pending", Reason="", readiness=false. Elapsed: 27.863264ms Mar 10 21:35:13.702: INFO: Pod "pod-9ed2f948-6f15-4549-adc3-6627613fd346": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.031559455s STEP: Saw pod success Mar 10 21:35:13.703: INFO: Pod "pod-9ed2f948-6f15-4549-adc3-6627613fd346" satisfied condition "success or failure" Mar 10 21:35:13.706: INFO: Trying to get logs from node jerma-worker2 pod pod-9ed2f948-6f15-4549-adc3-6627613fd346 container test-container: STEP: delete the pod Mar 10 21:35:13.724: INFO: Waiting for pod pod-9ed2f948-6f15-4549-adc3-6627613fd346 to disappear Mar 10 21:35:13.728: INFO: Pod pod-9ed2f948-6f15-4549-adc3-6627613fd346 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:35:13.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9858" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1819,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:35:13.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:35:14.263: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 10 21:35:16.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472914, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472914, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472914, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719472914, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:35:19.301: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 10 21:35:19.324: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:35:19.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-198" for this suite. STEP: Destroying namespace "webhook-198-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.714 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":107,"skipped":1841,"failed":0} [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:35:19.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:35:19.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 10 21:35:19.643: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-10T21:35:19Z generation:1 name:name1 resourceVersion:676028 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:71f5f6a8-c04e-4c2b-aa00-d7e1e5e07821] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 10 21:35:29.648: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-10T21:35:29Z generation:1 name:name2 resourceVersion:676074 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ca62168c-8aad-4ece-91c0-c8c9962cf81f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 10 21:35:39.654: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-10T21:35:19Z generation:2 name:name1 resourceVersion:676104 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:71f5f6a8-c04e-4c2b-aa00-d7e1e5e07821] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 10 21:35:49.661: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-10T21:35:29Z generation:2 name:name2 resourceVersion:676134 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ca62168c-8aad-4ece-91c0-c8c9962cf81f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 10 21:35:59.669: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-10T21:35:19Z generation:2 name:name1 resourceVersion:676162 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:71f5f6a8-c04e-4c2b-aa00-d7e1e5e07821] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 10 21:36:09.675: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-10T21:35:29Z generation:2 name:name2 resourceVersion:676192 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ca62168c-8aad-4ece-91c0-c8c9962cf81f] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:36:20.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4415" for this suite. • [SLOW TEST:60.741 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":108,"skipped":1841,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:36:20.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-15c28fbb-1007-4775-a1a7-ce8bdcc3f03f STEP: Creating secret with name s-test-opt-upd-82147faf-ee01-41a7-bdfd-e0aef396e97c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-15c28fbb-1007-4775-a1a7-ce8bdcc3f03f STEP: Updating secret s-test-opt-upd-82147faf-ee01-41a7-bdfd-e0aef396e97c STEP: Creating secret with name s-test-opt-create-4bc31981-1f7e-470e-8ce7-b6f94b3496b2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:37:44.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6171" for this suite. 
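------------------------------
The "Got : ADDED / MODIFIED / DELETED" lines above are watch events on the custom resource. Outside the suite, the dynamic client can produce the same stream without generated typed clients. A minimal sketch reusing the group and version from the log; the plural resource name "noxus" follows the form visible in the selfLinks:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Resource coordinates of the custom resource under watch.
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}

	w, err := dyn.Resource(gvr).Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Prints ADDED / MODIFIED / DELETED events much like the log above;
	// runs until the watch channel closes.
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type, ev.Object)
	}
}
------------------------------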
• [SLOW TEST:84.596 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1853,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:37:44.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:37:45.599: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 10 21:37:47.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473065, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473065, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473065, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473065, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:37:50.687: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 10 21:37:52.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5718 to-be-attached-pod -i -c=container1' Mar 10 21:37:52.893: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:37:52.899: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "webhook-5718" for this suite. STEP: Destroying namespace "webhook-5718-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.225 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":110,"skipped":1854,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:37:53.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 10 21:37:53.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6444 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 10 21:37:54.713: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0310 21:37:54.668226 1350 log.go:172] (0xc000b1d290) (0xc000a701e0) Create stream\nI0310 21:37:54.668277 1350 log.go:172] (0xc000b1d290) (0xc000a701e0) Stream added, broadcasting: 1\nI0310 21:37:54.671310 1350 log.go:172] (0xc000b1d290) Reply frame received for 1\nI0310 21:37:54.671343 1350 log.go:172] (0xc000b1d290) (0xc0006c0000) Create stream\nI0310 21:37:54.671354 1350 log.go:172] (0xc000b1d290) (0xc0006c0000) Stream added, broadcasting: 3\nI0310 21:37:54.672275 1350 log.go:172] (0xc000b1d290) Reply frame received for 3\nI0310 21:37:54.672319 1350 log.go:172] (0xc000b1d290) (0xc00070da40) Create stream\nI0310 21:37:54.672331 1350 log.go:172] (0xc000b1d290) (0xc00070da40) Stream added, broadcasting: 5\nI0310 21:37:54.673435 1350 log.go:172] (0xc000b1d290) Reply frame received for 5\nI0310 21:37:54.673461 1350 log.go:172] (0xc000b1d290) (0xc0006c00a0) Create stream\nI0310 21:37:54.673470 1350 log.go:172] (0xc000b1d290) (0xc0006c00a0) Stream added, broadcasting: 7\nI0310 21:37:54.674720 1350 log.go:172] (0xc000b1d290) Reply frame received for 7\nI0310 21:37:54.674910 1350 log.go:172] (0xc0006c0000) (3) Writing data frame\nI0310 21:37:54.675038 1350 log.go:172] (0xc0006c0000) (3) Writing data frame\nI0310 21:37:54.675940 1350 log.go:172] (0xc000b1d290) Data frame received for 5\nI0310 21:37:54.675958 1350 log.go:172] (0xc00070da40) (5) Data frame handling\nI0310 21:37:54.675973 1350 log.go:172] (0xc00070da40) (5) Data frame sent\nI0310 21:37:54.676525 1350 log.go:172] (0xc000b1d290) Data frame received for 5\nI0310 21:37:54.676546 1350 log.go:172] (0xc00070da40) (5) Data frame handling\nI0310 21:37:54.676561 1350 log.go:172] (0xc00070da40) (5) Data frame sent\nI0310 21:37:54.691728 1350 log.go:172] (0xc000b1d290) Data frame received for 7\nI0310 21:37:54.691759 1350 log.go:172] (0xc0006c00a0) (7) Data frame handling\nI0310 21:37:54.691965 1350 log.go:172] (0xc000b1d290) Data frame received for 5\nI0310 21:37:54.691989 1350 log.go:172] (0xc00070da40) (5) Data frame handling\nI0310 21:37:54.692621 1350 log.go:172] (0xc000b1d290) Data frame received for 1\nI0310 21:37:54.692662 1350 log.go:172] (0xc000b1d290) (0xc0006c0000) Stream removed, broadcasting: 3\nI0310 21:37:54.692712 1350 log.go:172] (0xc000a701e0) (1) Data frame handling\nI0310 21:37:54.692740 1350 log.go:172] (0xc000a701e0) (1) Data frame sent\nI0310 21:37:54.692755 1350 log.go:172] (0xc000b1d290) (0xc000a701e0) Stream removed, broadcasting: 1\nI0310 21:37:54.692767 1350 log.go:172] (0xc000b1d290) Go away received\nI0310 21:37:54.693165 1350 log.go:172] (0xc000b1d290) (0xc000a701e0) Stream removed, broadcasting: 1\nI0310 21:37:54.693180 1350 log.go:172] (0xc000b1d290) (0xc0006c0000) Stream removed, broadcasting: 3\nI0310 21:37:54.693188 1350 log.go:172] (0xc000b1d290) (0xc00070da40) Stream removed, broadcasting: 5\nI0310 21:37:54.693196 1350 log.go:172] (0xc000b1d290) (0xc0006c00a0) Stream removed, broadcasting: 7\n" Mar 10 21:37:54.714: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:37:56.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6444" for this suite. 
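------------------------------
The stderr above warns that kubectl run --generator=job/v1 is deprecated; the durable equivalent is to create the Job object directly (or with kubectl create job). A sketch of the same create-then-delete cycle in client-go, with illustrative names; foreground propagation stands in for --rm cleaning up the job's pods:

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "default"

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "main",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"sh", "-c", "echo 'stdin closed'"},
					}},
				},
			},
		},
	}
	if _, err := cs.BatchV1().Jobs(ns).Create(ctx, job, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// "--rm" semantics: delete the job once it has been used, cascading to
	// its pods via foreground propagation.
	fg := metav1.DeletePropagationForeground
	if err := cs.BatchV1().Jobs(ns).Delete(ctx, job.Name,
		metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
		panic(err)
	}
}
------------------------------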
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":111,"skipped":1855,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:37:56.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:37:57.638: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 10 21:37:59.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473077, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473077, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473077, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473077, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:38:02.713: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:38:02.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5717" for this suite. STEP: Destroying namespace "webhook-5717-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.157 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":112,"skipped":1900,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:38:02.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:38:09.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4077" for this suite. STEP: Destroying namespace "nsdeletetest-1185" for this suite. Mar 10 21:38:09.182: INFO: Namespace nsdeletetest-1185 was already deleted STEP: Destroying namespace "nsdeletetest-3979" for this suite. 
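The namespace test above works because deleting a namespace cascades to every namespaced object inside it, services included, and a recreated namespace of the same name starts empty. The same behaviour can be observed by hand; names here are hypothetical:

    kubectl create namespace nsdelete-demo
    kubectl create service clusterip demo-svc --tcp=80:80 -n nsdelete-demo
    kubectl delete namespace nsdelete-demo --wait=true
    kubectl create namespace nsdelete-demo
    kubectl get services -n nsdelete-demo   # empty: the service did not survive the round trip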
• [SLOW TEST:6.303 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":113,"skipped":1916,"failed":0} SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:38:09.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-9853 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9853 STEP: Deleting pre-stop pod Mar 10 21:38:18.272: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:38:18.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9853" for this suite. 
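The "prestop": 1 counter in the report above is the server pod recording exactly one callback from the tester's preStop hook. The hook itself is plain pod spec; a minimal sketch, with the image and callback URL as placeholders:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-demo
    spec:
      containers:
      - name: main
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
        lifecycle:
          preStop:
            exec:
              # runs inside the container before the TERM signal is delivered
              command: ["sh", "-c", "wget -qO- http://server.example.svc/prestop || true"]
    EOF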
• [SLOW TEST:9.104 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":114,"skipped":1918,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:38:18.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:38:18.335: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e3ed94c3-5b13-4806-ad39-efadb1806afe" in namespace "security-context-test-3984" to be "success or failure" Mar 10 21:38:18.339: INFO: Pod "alpine-nnp-false-e3ed94c3-5b13-4806-ad39-efadb1806afe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.967111ms Mar 10 21:38:20.344: INFO: Pod "alpine-nnp-false-e3ed94c3-5b13-4806-ad39-efadb1806afe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009013015s Mar 10 21:38:22.347: INFO: Pod "alpine-nnp-false-e3ed94c3-5b13-4806-ad39-efadb1806afe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012242264s Mar 10 21:38:22.347: INFO: Pod "alpine-nnp-false-e3ed94c3-5b13-4806-ad39-efadb1806afe" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:38:22.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3984" for this suite. 
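The alpine-nnp-false pod above succeeds because allowPrivilegeEscalation: false sets the kernel no_new_privs flag on the container process, so setuid binaries cannot gain privileges. The relevant securityContext, sketched on a throwaway pod with illustrative names:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: nnp-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: docker.io/library/alpine:3.7
        command: ["sh", "-c", "id"]
        securityContext:
          runAsUser: 1000
          allowPrivilegeEscalation: false   # sets no_new_privs for this container
    EOF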
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1931,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:38:22.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9774.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9774.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9774.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9774.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9774.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9774.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 10 21:38:26.586: INFO: DNS probes using dns-9774/dns-test-ff0ad3e9-ca08-4a2a-96b7-a8be826694a9 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:38:26.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9774" for this suite. 
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":116,"skipped":1943,"failed":0} ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:38:26.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4751 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4751 STEP: creating replication controller externalsvc in namespace services-4751 I0310 21:38:26.847897 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4751, replica count: 2 I0310 21:38:29.898346 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 10 21:38:29.962: INFO: Creating new exec pod Mar 10 21:38:32.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4751 execpodd2qbq -- /bin/sh -x -c nslookup nodeport-service' Mar 10 21:38:32.184: INFO: stderr: "I0310 21:38:32.110362 1371 log.go:172] (0xc00075a840) (0xc00095e140) Create stream\nI0310 21:38:32.110403 1371 log.go:172] (0xc00075a840) (0xc00095e140) Stream added, broadcasting: 1\nI0310 21:38:32.112687 1371 log.go:172] (0xc00075a840) Reply frame received for 1\nI0310 21:38:32.112710 1371 log.go:172] (0xc00075a840) (0xc000633a40) Create stream\nI0310 21:38:32.112718 1371 log.go:172] (0xc00075a840) (0xc000633a40) Stream added, broadcasting: 3\nI0310 21:38:32.113454 1371 log.go:172] (0xc00075a840) Reply frame received for 3\nI0310 21:38:32.113497 1371 log.go:172] (0xc00075a840) (0xc0007b21e0) Create stream\nI0310 21:38:32.113506 1371 log.go:172] (0xc00075a840) (0xc0007b21e0) Stream added, broadcasting: 5\nI0310 21:38:32.114333 1371 log.go:172] (0xc00075a840) Reply frame received for 5\nI0310 21:38:32.172371 1371 log.go:172] (0xc00075a840) Data frame received for 5\nI0310 21:38:32.172399 1371 log.go:172] (0xc0007b21e0) (5) Data frame handling\nI0310 21:38:32.172416 1371 log.go:172] (0xc0007b21e0) (5) Data frame sent\n+ nslookup nodeport-service\nI0310 21:38:32.178097 1371 log.go:172] (0xc00075a840) Data frame received for 3\nI0310 21:38:32.178143 1371 log.go:172] (0xc000633a40) (3) Data frame handling\nI0310 21:38:32.178165 1371 log.go:172] (0xc000633a40) (3) Data frame sent\nI0310 21:38:32.179170 1371 log.go:172] (0xc00075a840) Data frame received for 3\nI0310 21:38:32.179186 1371 log.go:172] (0xc000633a40) (3) Data frame handling\nI0310 21:38:32.179197 1371 log.go:172] (0xc000633a40) (3) Data 
frame sent\nI0310 21:38:32.179637 1371 log.go:172] (0xc00075a840) Data frame received for 3\nI0310 21:38:32.179650 1371 log.go:172] (0xc000633a40) (3) Data frame handling\nI0310 21:38:32.179863 1371 log.go:172] (0xc00075a840) Data frame received for 5\nI0310 21:38:32.179881 1371 log.go:172] (0xc0007b21e0) (5) Data frame handling\nI0310 21:38:32.181378 1371 log.go:172] (0xc00075a840) Data frame received for 1\nI0310 21:38:32.181393 1371 log.go:172] (0xc00095e140) (1) Data frame handling\nI0310 21:38:32.181403 1371 log.go:172] (0xc00095e140) (1) Data frame sent\nI0310 21:38:32.181415 1371 log.go:172] (0xc00075a840) (0xc00095e140) Stream removed, broadcasting: 1\nI0310 21:38:32.181440 1371 log.go:172] (0xc00075a840) Go away received\nI0310 21:38:32.181686 1371 log.go:172] (0xc00075a840) (0xc00095e140) Stream removed, broadcasting: 1\nI0310 21:38:32.181702 1371 log.go:172] (0xc00075a840) (0xc000633a40) Stream removed, broadcasting: 3\nI0310 21:38:32.181712 1371 log.go:172] (0xc00075a840) (0xc0007b21e0) Stream removed, broadcasting: 5\n" Mar 10 21:38:32.184: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4751.svc.cluster.local\tcanonical name = externalsvc.services-4751.svc.cluster.local.\nName:\texternalsvc.services-4751.svc.cluster.local\nAddress: 10.96.64.234\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4751, will wait for the garbage collector to delete the pods Mar 10 21:38:32.244: INFO: Deleting ReplicationController externalsvc took: 6.957133ms Mar 10 21:38:32.545: INFO: Terminating ReplicationController externalsvc pods took: 300.277795ms Mar 10 21:38:36.795: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:38:36.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4751" for this suite. 
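The nslookup output above is the whole mechanism: after the type change, nodeport-service resolves as a CNAME to externalsvc rather than to a cluster IP, because ExternalName services are answered purely in DNS with no proxying involved. Creating one directly (name and target hypothetical):

    kubectl create service externalname demo-external \
      --external-name externalsvc.services-4751.svc.cluster.local
    # DNS for demo-external.<namespace>.svc.cluster.local now returns a CNAME to the target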
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.150 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":117,"skipped":1943,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:38:36.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:38:37.261: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:38:40.317: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:38:40.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3945" for this suite. STEP: Destroying namespace "webhook-3945-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":118,"skipped":1987,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:38:40.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:38:52.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-756" for this suite. • [SLOW TEST:11.234 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":119,"skipped":1990,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:38:52.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 10 21:38:52.143: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:38:55.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1037" for this suite. 
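With restartPolicy: Never a failing init container is terminal: the kubelet does not retry it, the app containers never start, and the pod phase goes to Failed, which is exactly what the spec above waits for. A minimal reproduction sketch (names hypothetical):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "exit 1"]   # fails once and is never retried
      containers:
      - name: app
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo should never run"]
    EOF
    kubectl get pod init-fail-demo   # expect Init:Error and phase Failed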
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":120,"skipped":2005,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:38:55.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-cr9f STEP: Creating a pod to test atomic-volume-subpath Mar 10 21:38:55.911: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cr9f" in namespace "subpath-3244" to be "success or failure" Mar 10 21:38:55.915: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055594ms Mar 10 21:38:57.919: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Running", Reason="", readiness=true. Elapsed: 2.00818632s Mar 10 21:38:59.923: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Running", Reason="", readiness=true. Elapsed: 4.012346424s Mar 10 21:39:01.927: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Running", Reason="", readiness=true. Elapsed: 6.016258761s Mar 10 21:39:03.931: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Running", Reason="", readiness=true. Elapsed: 8.020005652s Mar 10 21:39:05.935: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Running", Reason="", readiness=true. Elapsed: 10.023992801s Mar 10 21:39:07.939: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Running", Reason="", readiness=true. Elapsed: 12.028085966s Mar 10 21:39:09.944: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Running", Reason="", readiness=true. Elapsed: 14.032603525s Mar 10 21:39:11.948: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Running", Reason="", readiness=true. Elapsed: 16.036722244s Mar 10 21:39:13.951: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Running", Reason="", readiness=true. Elapsed: 18.040352805s Mar 10 21:39:15.955: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Running", Reason="", readiness=true. Elapsed: 20.044342551s Mar 10 21:39:17.959: INFO: Pod "pod-subpath-test-secret-cr9f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.048143831s STEP: Saw pod success Mar 10 21:39:17.959: INFO: Pod "pod-subpath-test-secret-cr9f" satisfied condition "success or failure" Mar 10 21:39:17.961: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-cr9f container test-container-subpath-secret-cr9f: STEP: delete the pod Mar 10 21:39:18.013: INFO: Waiting for pod pod-subpath-test-secret-cr9f to disappear Mar 10 21:39:18.025: INFO: Pod pod-subpath-test-secret-cr9f no longer exists STEP: Deleting pod pod-subpath-test-secret-cr9f Mar 10 21:39:18.025: INFO: Deleting pod "pod-subpath-test-secret-cr9f" in namespace "subpath-3244" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:39:18.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3244" for this suite. • [SLOW TEST:22.237 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":121,"skipped":2050,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:39:18.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:39:18.985: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:39:22.067: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding 
mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:39:22.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5815" for this suite. STEP: Destroying namespace "webhook-5815-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":122,"skipped":2054,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:39:22.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 10 21:39:22.349: INFO: Waiting up to 5m0s for pod "pod-a824e38a-1cca-4565-b190-f5459b4625d6" in namespace "emptydir-9035" to be "success or failure" Mar 10 21:39:22.353: INFO: Pod "pod-a824e38a-1cca-4565-b190-f5459b4625d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.898236ms Mar 10 21:39:24.356: INFO: Pod "pod-a824e38a-1cca-4565-b190-f5459b4625d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007548547s STEP: Saw pod success Mar 10 21:39:24.357: INFO: Pod "pod-a824e38a-1cca-4565-b190-f5459b4625d6" satisfied condition "success or failure" Mar 10 21:39:24.359: INFO: Trying to get logs from node jerma-worker2 pod pod-a824e38a-1cca-4565-b190-f5459b4625d6 container test-container: STEP: delete the pod Mar 10 21:39:24.379: INFO: Waiting for pod pod-a824e38a-1cca-4565-b190-f5459b4625d6 to disappear Mar 10 21:39:24.387: INFO: Pod pod-a824e38a-1cca-4565-b190-f5459b4625d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:39:24.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9035" for this suite. 
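The (non-root,0666,default) case above boils down to: a non-root UID writes a mode-0666 file into an emptyDir backed by the node's disk (the "default" medium) and reads it back. A sketch of that shape; the fsGroup setting is my assumption for making the volume group-writable, not something the log shows:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000      # non-root, as in the test name
        fsGroup: 1000        # assumption: lets the non-root UID write the volume
      containers:
      - name: test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "touch /data/f && chmod 0666 /data/f && ls -l /data/f"]
        volumeMounts:
        - name: scratch
          mountPath: /data
      volumes:
      - name: scratch
        emptyDir: {}         # default medium = node-local disk
    EOF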
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2074,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:39:24.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:39:28.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2637" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":124,"skipped":2078,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:39:29.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:39:29.122: INFO: Waiting up to 5m0s for pod "busybox-user-65534-76d4a57a-2110-475f-90a7-4684de1bdf54" in namespace "security-context-test-9789" to be "success or failure" Mar 10 21:39:29.146: INFO: Pod "busybox-user-65534-76d4a57a-2110-475f-90a7-4684de1bdf54": Phase="Pending", Reason="", readiness=false. Elapsed: 23.88579ms Mar 10 21:39:31.149: INFO: Pod "busybox-user-65534-76d4a57a-2110-475f-90a7-4684de1bdf54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027388425s Mar 10 21:39:31.150: INFO: Pod "busybox-user-65534-76d4a57a-2110-475f-90a7-4684de1bdf54" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:39:31.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9789" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2111,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:39:31.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-edb7ec35-8cbd-4b5d-9485-092fb8192b57 STEP: Creating secret with name s-test-opt-upd-4207e867-51e8-493c-a0fa-7a5b64a3bed6 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-edb7ec35-8cbd-4b5d-9485-092fb8192b57 STEP: Updating secret s-test-opt-upd-4207e867-51e8-493c-a0fa-7a5b64a3bed6 STEP: Creating secret with name s-test-opt-create-cc0f68a5-a7af-4ebc-963d-faee71525765 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:41:01.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-565" for this suite. • [SLOW TEST:90.636 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2120,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:41:01.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 10 21:41:01.905: INFO: Waiting up to 5m0s for pod "client-containers-1b186699-d032-4dab-81f4-8e63b1ab3226" in namespace "containers-4433" to be "success or failure" Mar 10 21:41:01.921: INFO: Pod "client-containers-1b186699-d032-4dab-81f4-8e63b1ab3226": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.674689ms Mar 10 21:41:03.925: INFO: Pod "client-containers-1b186699-d032-4dab-81f4-8e63b1ab3226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019637883s Mar 10 21:41:05.928: INFO: Pod "client-containers-1b186699-d032-4dab-81f4-8e63b1ab3226": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023537616s STEP: Saw pod success Mar 10 21:41:05.928: INFO: Pod "client-containers-1b186699-d032-4dab-81f4-8e63b1ab3226" satisfied condition "success or failure" Mar 10 21:41:05.931: INFO: Trying to get logs from node jerma-worker2 pod client-containers-1b186699-d032-4dab-81f4-8e63b1ab3226 container test-container: STEP: delete the pod Mar 10 21:41:05.988: INFO: Waiting for pod client-containers-1b186699-d032-4dab-81f4-8e63b1ab3226 to disappear Mar 10 21:41:05.998: INFO: Pod client-containers-1b186699-d032-4dab-81f4-8e63b1ab3226 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:41:05.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4433" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2160,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:41:06.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:41:06.592: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:41:09.634: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:41:09.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5471" for this suite. STEP: Destroying namespace "webhook-5471-markers" for this suite. 
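A few specs back, the Docker Containers test overrides the image's default arguments: in pod terms, spec.containers[].args replaces the image's CMD while command (if set) replaces its ENTRYPOINT. A minimal sketch with hypothetical names:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: args-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: docker.io/library/busybox:1.29
        command: ["echo"]                   # replaces the image ENTRYPOINT
        args: ["overridden", "arguments"]   # replaces the image CMD (docker cmd)
    EOF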
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":128,"skipped":2163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:41:09.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 10 21:41:10.037: INFO: Waiting up to 5m0s for pod "pod-bcb3fa65-5206-4049-a9f0-aecd72f95da2" in namespace "emptydir-2783" to be "success or failure" Mar 10 21:41:10.041: INFO: Pod "pod-bcb3fa65-5206-4049-a9f0-aecd72f95da2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078448ms Mar 10 21:41:12.046: INFO: Pod "pod-bcb3fa65-5206-4049-a9f0-aecd72f95da2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008712906s STEP: Saw pod success Mar 10 21:41:12.046: INFO: Pod "pod-bcb3fa65-5206-4049-a9f0-aecd72f95da2" satisfied condition "success or failure" Mar 10 21:41:12.048: INFO: Trying to get logs from node jerma-worker2 pod pod-bcb3fa65-5206-4049-a9f0-aecd72f95da2 container test-container: STEP: delete the pod Mar 10 21:41:12.093: INFO: Waiting for pod pod-bcb3fa65-5206-4049-a9f0-aecd72f95da2 to disappear Mar 10 21:41:12.104: INFO: Pod pod-bcb3fa65-5206-4049-a9f0-aecd72f95da2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:41:12.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2783" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2187,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:41:12.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0310 21:41:22.240338 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 10 21:41:22.240: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:41:22.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9738" for this suite. 
• [SLOW TEST:10.133 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":130,"skipped":2205,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:41:22.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 10 21:41:22.305: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-a 07ce694e-6b3e-48ba-bd17-9ba9c088523d 678167 0 2020-03-10 21:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 10 21:41:22.305: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-a 07ce694e-6b3e-48ba-bd17-9ba9c088523d 678167 0 2020-03-10 21:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 10 21:41:32.312: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-a 07ce694e-6b3e-48ba-bd17-9ba9c088523d 678213 0 2020-03-10 21:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 10 21:41:32.312: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-a 07ce694e-6b3e-48ba-bd17-9ba9c088523d 678213 0 2020-03-10 21:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 10 21:41:42.318: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-a 07ce694e-6b3e-48ba-bd17-9ba9c088523d 678243 0 2020-03-10 21:41:22 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 10 21:41:42.319: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-a 07ce694e-6b3e-48ba-bd17-9ba9c088523d 678243 0 2020-03-10 21:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 10 21:41:52.324: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-a 07ce694e-6b3e-48ba-bd17-9ba9c088523d 678273 0 2020-03-10 21:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 10 21:41:52.324: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-a 07ce694e-6b3e-48ba-bd17-9ba9c088523d 678273 0 2020-03-10 21:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 10 21:42:02.331: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-b 9c8fc91e-d01a-4dc6-8bea-2af70c20309a 678303 0 2020-03-10 21:42:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 10 21:42:02.332: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-b 9c8fc91e-d01a-4dc6-8bea-2af70c20309a 678303 0 2020-03-10 21:42:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 10 21:42:12.335: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-b 9c8fc91e-d01a-4dc6-8bea-2af70c20309a 678333 0 2020-03-10 21:42:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 10 21:42:12.335: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9268 /api/v1/namespaces/watch-9268/configmaps/e2e-watch-test-configmap-b 9c8fc91e-d01a-4dc6-8bea-2af70c20309a 678333 0 2020-03-10 21:42:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:42:22.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9268" for this suite. 
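Each watcher above receives ADDED/MODIFIED/DELETED notifications only for configmaps matching its label selector, which is also why the B watcher stays quiet during the A mutations. Reasonably recent kubectl can show the same event stream (label and namespace copied from the log, so illustrative only):

    kubectl get configmaps -n watch-9268 -w -l watch-this-configmap=multiple-watchers-A \
      --output-watch-events   # prints the event type alongside each object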
• [SLOW TEST:60.097 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":131,"skipped":2212,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:42:22.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8506.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8506.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 10 21:42:26.469: INFO: DNS probes using dns-8506/dns-test-e76a13d2-eac0-4bf4-b078-d07c73df7910 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:42:26.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8506" for this suite. 
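The wheezy/jessie probe scripts above boil down to resolving the API server's service name over UDP and TCP from inside a pod. A one-off manual check of the same lookup, as a sketch (pod name and image are illustrative):

    # resolve the cluster DNS name of the API service from a throwaway pod
    kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- \
      nslookup kubernetes.default.svc.cluster.local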
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":132,"skipped":2232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:42:26.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:42:26.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0db04b1-9ac3-4ea2-94e8-cfe96c572ea5" in namespace "projected-7530" to be "success or failure" Mar 10 21:42:26.678: INFO: Pod "downwardapi-volume-f0db04b1-9ac3-4ea2-94e8-cfe96c572ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.025844ms Mar 10 21:42:28.682: INFO: Pod "downwardapi-volume-f0db04b1-9ac3-4ea2-94e8-cfe96c572ea5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.037854283s STEP: Saw pod success Mar 10 21:42:28.682: INFO: Pod "downwardapi-volume-f0db04b1-9ac3-4ea2-94e8-cfe96c572ea5" satisfied condition "success or failure" Mar 10 21:42:28.686: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f0db04b1-9ac3-4ea2-94e8-cfe96c572ea5 container client-container: STEP: delete the pod Mar 10 21:42:28.706: INFO: Waiting for pod downwardapi-volume-f0db04b1-9ac3-4ea2-94e8-cfe96c572ea5 to disappear Mar 10 21:42:28.745: INFO: Pod downwardapi-volume-f0db04b1-9ac3-4ea2-94e8-cfe96c572ea5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:42:28.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7530" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2255,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:42:28.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:42:28.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f2ecc27-54f6-4a12-8576-0e9c6601221e" in namespace "downward-api-3262" to be "success or failure" Mar 10 21:42:28.884: INFO: Pod "downwardapi-volume-1f2ecc27-54f6-4a12-8576-0e9c6601221e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.246841ms Mar 10 21:42:30.901: INFO: Pod "downwardapi-volume-1f2ecc27-54f6-4a12-8576-0e9c6601221e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024457149s STEP: Saw pod success Mar 10 21:42:30.902: INFO: Pod "downwardapi-volume-1f2ecc27-54f6-4a12-8576-0e9c6601221e" satisfied condition "success or failure" Mar 10 21:42:30.936: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1f2ecc27-54f6-4a12-8576-0e9c6601221e container client-container: STEP: delete the pod Mar 10 21:42:30.991: INFO: Waiting for pod downwardapi-volume-1f2ecc27-54f6-4a12-8576-0e9c6601221e to disappear Mar 10 21:42:30.998: INFO: Pod downwardapi-volume-1f2ecc27-54f6-4a12-8576-0e9c6601221e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:42:30.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3262" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2256,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:42:31.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 10 21:42:31.083: INFO: Waiting up to 5m0s for pod "downward-api-001dcdd9-2b8b-45b0-ac0c-a2e174e6837f" in namespace "downward-api-712" to be "success or failure" Mar 10 21:42:31.088: INFO: Pod "downward-api-001dcdd9-2b8b-45b0-ac0c-a2e174e6837f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.982815ms Mar 10 21:42:33.092: INFO: Pod "downward-api-001dcdd9-2b8b-45b0-ac0c-a2e174e6837f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009343453s STEP: Saw pod success Mar 10 21:42:33.093: INFO: Pod "downward-api-001dcdd9-2b8b-45b0-ac0c-a2e174e6837f" satisfied condition "success or failure" Mar 10 21:42:33.096: INFO: Trying to get logs from node jerma-worker pod downward-api-001dcdd9-2b8b-45b0-ac0c-a2e174e6837f container dapi-container: STEP: delete the pod Mar 10 21:42:33.142: INFO: Waiting for pod downward-api-001dcdd9-2b8b-45b0-ac0c-a2e174e6837f to disappear Mar 10 21:42:33.148: INFO: Pod downward-api-001dcdd9-2b8b-45b0-ac0c-a2e174e6837f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:42:33.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-712" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2275,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:42:33.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-5077cc2d-5de8-44e3-9245-8d1415f27763 STEP: Creating a pod to test consume configMaps Mar 10 21:42:33.235: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d723d60-9590-4b57-a26f-c1add21142c7" in namespace "projected-7829" to be "success or failure" Mar 10 21:42:33.265: INFO: Pod "pod-projected-configmaps-1d723d60-9590-4b57-a26f-c1add21142c7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.180833ms Mar 10 21:42:35.269: INFO: Pod "pod-projected-configmaps-1d723d60-9590-4b57-a26f-c1add21142c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.034010432s STEP: Saw pod success Mar 10 21:42:35.269: INFO: Pod "pod-projected-configmaps-1d723d60-9590-4b57-a26f-c1add21142c7" satisfied condition "success or failure" Mar 10 21:42:35.272: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-1d723d60-9590-4b57-a26f-c1add21142c7 container projected-configmap-volume-test: STEP: delete the pod Mar 10 21:42:35.314: INFO: Waiting for pod pod-projected-configmaps-1d723d60-9590-4b57-a26f-c1add21142c7 to disappear Mar 10 21:42:35.319: INFO: Pod pod-projected-configmaps-1d723d60-9590-4b57-a26f-c1add21142c7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:42:35.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7829" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2277,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:42:35.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 10 21:42:35.449: INFO: >>> kubeConfig: /root/.kube/config Mar 10 21:42:38.292: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:42:47.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4447" for this suite. • [SLOW TEST:12.090 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":137,"skipped":2316,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:42:47.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-199e9a26-1270-45fb-ba9d-b9255675bbd8 in namespace container-probe-4204 Mar 10 21:42:49.482: INFO: Started pod liveness-199e9a26-1270-45fb-ba9d-b9255675bbd8 in namespace container-probe-4204 STEP: checking the pod's current state and verifying that restartCount is present Mar 10 21:42:49.485: INFO: Initial restart count of pod 
liveness-199e9a26-1270-45fb-ba9d-b9255675bbd8 is 0 Mar 10 21:43:05.519: INFO: Restart count of pod container-probe-4204/liveness-199e9a26-1270-45fb-ba9d-b9255675bbd8 is now 1 (16.033519759s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:05.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4204" for this suite. • [SLOW TEST:18.181 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2317,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:05.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 10 21:43:05.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4602' Mar 10 21:43:07.968: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 10 21:43:07.968: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller Mar 10 21:43:07.997: INFO: scanned /root for discovery docs: Mar 10 21:43:07.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4602' Mar 10 21:43:23.831: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 10 21:43:23.831: INFO: stdout: "Created e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8\nScaling up e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Mar 10 21:43:23.831: INFO: stdout: "Created e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8\nScaling up e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 10 21:43:23.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4602' Mar 10 21:43:23.948: INFO: stderr: "" Mar 10 21:43:23.948: INFO: stdout: "e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8-xszt7 " Mar 10 21:43:23.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8-xszt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4602' Mar 10 21:43:24.050: INFO: stderr: "" Mar 10 21:43:24.050: INFO: stdout: "true" Mar 10 21:43:24.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8-xszt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4602' Mar 10 21:43:24.157: INFO: stderr: "" Mar 10 21:43:24.157: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 10 21:43:24.157: INFO: e2e-test-httpd-rc-33581918b272dbafbdbb2021edf6eae8-xszt7 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678 Mar 10 21:43:24.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4602' Mar 10 21:43:24.235: INFO: stderr: "" Mar 10 21:43:24.235: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:24.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4602" for this suite. 
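As the stderr above notes, `kubectl rolling-update` is deprecated in favor of rollouts. The same "update to the same image" flow expressed against a Deployment, as a sketch (names are illustrative):

    kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
    # setting the same image is a no-op; force a fresh rollout instead
    kubectl rollout restart deployment/e2e-test-httpd
    kubectl rollout status deployment/e2e-test-httpd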
• [SLOW TEST:18.707 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":139,"skipped":2324,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:24.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 10 21:43:24.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9537' Mar 10 21:43:24.460: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 10 21:43:24.460: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 10 21:43:24.511: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-gqp78] Mar 10 21:43:24.512: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-gqp78" in namespace "kubectl-9537" to be "running and ready" Mar 10 21:43:24.515: INFO: Pod "e2e-test-httpd-rc-gqp78": Phase="Pending", Reason="", readiness=false. Elapsed: 3.678664ms Mar 10 21:43:26.518: INFO: Pod "e2e-test-httpd-rc-gqp78": Phase="Running", Reason="", readiness=true. Elapsed: 2.006158571s Mar 10 21:43:26.518: INFO: Pod "e2e-test-httpd-rc-gqp78" satisfied condition "running and ready" Mar 10 21:43:26.518: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-gqp78] Mar 10 21:43:26.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9537' Mar 10 21:43:26.598: INFO: stderr: "" Mar 10 21:43:26.599: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.250. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.250. 
Set the 'ServerName' directive globally to suppress this message\n[Tue Mar 10 21:43:25.678147 2020] [mpm_event:notice] [pid 1:tid 139816446544744] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue Mar 10 21:43:25.678209 2020] [core:notice] [pid 1:tid 139816446544744] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Mar 10 21:43:26.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9537' Mar 10 21:43:26.693: INFO: stderr: "" Mar 10 21:43:26.693: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:26.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9537" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":140,"skipped":2326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:26.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:43:26.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-979bced8-c057-409e-8633-00de609f369e" in namespace "downward-api-5880" to be "success or failure" Mar 10 21:43:26.768: INFO: Pod "downwardapi-volume-979bced8-c057-409e-8633-00de609f369e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.532376ms Mar 10 21:43:28.771: INFO: Pod "downwardapi-volume-979bced8-c057-409e-8633-00de609f369e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025109s STEP: Saw pod success Mar 10 21:43:28.771: INFO: Pod "downwardapi-volume-979bced8-c057-409e-8633-00de609f369e" satisfied condition "success or failure" Mar 10 21:43:28.774: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-979bced8-c057-409e-8633-00de609f369e container client-container: STEP: delete the pod Mar 10 21:43:28.795: INFO: Waiting for pod downwardapi-volume-979bced8-c057-409e-8633-00de609f369e to disappear Mar 10 21:43:28.812: INFO: Pod downwardapi-volume-979bced8-c057-409e-8633-00de609f369e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:28.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5880" for this suite. 
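The `--generator=run/v1` form used above is deprecated, as the warning in the log says. A sketch of the current equivalents for the same create-and-read-logs flow (names illustrative):

    kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
    kubectl logs deployment/e2e-test-httpd    # logs via the controller, like `kubectl logs rc/...` above
    kubectl delete deployment e2e-test-httpd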
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2356,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:28.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:43:29.695: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 10 21:43:31.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473409, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473409, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473409, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473409, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:43:34.768: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:34.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5644" for this suite. STEP: Destroying namespace "webhook-5644-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.246 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":142,"skipped":2362,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:35.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:43:35.148: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:36.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8766" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":143,"skipped":2365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:36.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:43:36.424: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 4.613585ms)
Mar 10 21:43:36.427: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.882061ms)
Mar 10 21:43:36.431: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.623862ms)
Mar 10 21:43:36.458: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 26.971021ms)
Mar 10 21:43:36.462: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.187123ms)
Mar 10 21:43:36.465: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.933546ms)
Mar 10 21:43:36.468: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.307294ms)
Mar 10 21:43:36.471: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.012662ms)
Mar 10 21:43:36.474: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.898115ms)
Mar 10 21:43:36.477: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.912833ms)
Mar 10 21:43:36.480: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.995503ms)
Mar 10 21:43:36.483: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.745792ms)
Mar 10 21:43:36.485: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.690212ms)
Mar 10 21:43:36.488: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.553881ms)
Mar 10 21:43:36.491: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.878921ms)
Mar 10 21:43:36.494: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.906714ms)
Mar 10 21:43:36.496: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.251311ms)
Mar 10 21:43:36.499: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.51379ms)
Mar 10 21:43:36.501: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.663231ms)
Mar 10 21:43:36.504: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 2.484761ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:36.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5661" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":144,"skipped":2400,"failed":0} SS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:36.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:36.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7913" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":145,"skipped":2402,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:36.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ddf5bd37-a5b0-464f-9c71-7dff05a3a944 STEP: Creating a pod to test consume secrets Mar 10 21:43:36.678: INFO: Waiting up to 5m0s for pod "pod-secrets-bba4425c-1046-46a5-a16a-1488be09e148" in namespace "secrets-6904" to be "success or failure" Mar 10 21:43:36.683: INFO: Pod "pod-secrets-bba4425c-1046-46a5-a16a-1488be09e148": Phase="Pending", Reason="", readiness=false. Elapsed: 4.415942ms Mar 10 21:43:38.686: INFO: Pod "pod-secrets-bba4425c-1046-46a5-a16a-1488be09e148": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007217138s Mar 10 21:43:40.690: INFO: Pod "pod-secrets-bba4425c-1046-46a5-a16a-1488be09e148": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011464919s STEP: Saw pod success Mar 10 21:43:40.690: INFO: Pod "pod-secrets-bba4425c-1046-46a5-a16a-1488be09e148" satisfied condition "success or failure" Mar 10 21:43:40.692: INFO: Trying to get logs from node jerma-worker pod pod-secrets-bba4425c-1046-46a5-a16a-1488be09e148 container secret-env-test: STEP: delete the pod Mar 10 21:43:40.714: INFO: Waiting for pod pod-secrets-bba4425c-1046-46a5-a16a-1488be09e148 to disappear Mar 10 21:43:40.719: INFO: Pod pod-secrets-bba4425c-1046-46a5-a16a-1488be09e148 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:40.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6904" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2413,"failed":0} S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:40.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:43:42.899: INFO: Waiting up to 5m0s for pod "client-envvars-f9ac1ad2-1406-4145-8a76-6ed63e7a3b8b" in namespace "pods-7770" to be "success or failure" Mar 10 21:43:42.903: INFO: Pod "client-envvars-f9ac1ad2-1406-4145-8a76-6ed63e7a3b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.781634ms Mar 10 21:43:44.907: INFO: Pod "client-envvars-f9ac1ad2-1406-4145-8a76-6ed63e7a3b8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007949539s STEP: Saw pod success Mar 10 21:43:44.907: INFO: Pod "client-envvars-f9ac1ad2-1406-4145-8a76-6ed63e7a3b8b" satisfied condition "success or failure" Mar 10 21:43:44.911: INFO: Trying to get logs from node jerma-worker pod client-envvars-f9ac1ad2-1406-4145-8a76-6ed63e7a3b8b container env3cont: STEP: delete the pod Mar 10 21:43:44.932: INFO: Waiting for pod client-envvars-f9ac1ad2-1406-4145-8a76-6ed63e7a3b8b to disappear Mar 10 21:43:44.936: INFO: Pod client-envvars-f9ac1ad2-1406-4145-8a76-6ed63e7a3b8b no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:44.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7770" for this suite. 
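The pod environment test above relies on the kubelet injecting <SERVICE>_SERVICE_HOST / <SERVICE>_SERVICE_PORT variables for services that already exist when the pod starts. A quick manual check, with illustrative names:

    kubectl create service clusterip fooservice --tcp=8765:8080
    kubectl run envtest --rm -it --restart=Never --image=busybox:1.28 -- sh -c 'env | grep FOOSERVICE'
    # expected output includes FOOSERVICE_SERVICE_HOST=... and FOOSERVICE_SERVICE_PORT=8765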
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2414,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:44.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-9c6ce66c-7a07-4f78-bf85-5bf9c610ad70 STEP: Creating secret with name secret-projected-all-test-volume-6a1002b7-fc43-4543-a3fc-455aa37ef974 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 10 21:43:45.045: INFO: Waiting up to 5m0s for pod "projected-volume-d7d34ffe-9870-40c0-bbbf-33f4d9fc13ff" in namespace "projected-9334" to be "success or failure" Mar 10 21:43:45.049: INFO: Pod "projected-volume-d7d34ffe-9870-40c0-bbbf-33f4d9fc13ff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.275964ms Mar 10 21:43:47.067: INFO: Pod "projected-volume-d7d34ffe-9870-40c0-bbbf-33f4d9fc13ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022009782s Mar 10 21:43:49.072: INFO: Pod "projected-volume-d7d34ffe-9870-40c0-bbbf-33f4d9fc13ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026288722s STEP: Saw pod success Mar 10 21:43:49.072: INFO: Pod "projected-volume-d7d34ffe-9870-40c0-bbbf-33f4d9fc13ff" satisfied condition "success or failure" Mar 10 21:43:49.076: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-d7d34ffe-9870-40c0-bbbf-33f4d9fc13ff container projected-all-volume-test: STEP: delete the pod Mar 10 21:43:49.165: INFO: Waiting for pod projected-volume-d7d34ffe-9870-40c0-bbbf-33f4d9fc13ff to disappear Mar 10 21:43:49.175: INFO: Pod projected-volume-d7d34ffe-9870-40c0-bbbf-33f4d9fc13ff no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:43:49.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9334" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2428,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:43:49.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:44:02.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2960" for this suite. • [SLOW TEST:13.161 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":149,"skipped":2430,"failed":0} [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:44:02.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0310 21:44:03.450236 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 10 21:44:03.450: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:44:03.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3457" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":150,"skipped":2430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:44:03.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 10 21:44:03.648: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:44:03.654: INFO: Number of nodes with available pods: 0 Mar 10 21:44:03.654: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:44:04.661: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:44:04.664: INFO: Number of nodes with available pods: 0 Mar 10 21:44:04.664: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:44:05.658: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:44:05.662: INFO: Number of nodes with available pods: 2 Mar 10 21:44:05.662: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 10 21:44:05.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:44:05.721: INFO: Number of nodes with available pods: 1 Mar 10 21:44:05.721: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:44:06.726: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:44:06.729: INFO: Number of nodes with available pods: 1 Mar 10 21:44:06.729: INFO: Node jerma-worker is running more than one daemon pod Mar 10 21:44:07.724: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 21:44:07.727: INFO: Number of nodes with available pods: 2 Mar 10 21:44:07.727: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-17, will wait for the garbage collector to delete the pods Mar 10 21:44:07.788: INFO: Deleting DaemonSet.extensions daemon-set took: 4.852299ms Mar 10 21:44:07.888: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.20503ms Mar 10 21:44:16.091: INFO: Number of nodes with available pods: 0 Mar 10 21:44:16.091: INFO: Number of running nodes: 0, number of available pods: 0 Mar 10 21:44:16.094: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-17/daemonsets","resourceVersion":"679372"},"items":null} Mar 10 21:44:16.096: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-17/pods","resourceVersion":"679372"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:44:16.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-17" for this suite. 
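The revive check above marks one daemon pod Failed and waits for the DaemonSet controller to schedule a replacement on the same node. Deleting a daemon pod exercises the same recovery path; a sketch with an illustrative label and a placeholder pod name:

    kubectl get pods -l name=daemon-set -o wide
    kubectl delete pod <one-daemon-pod>           # placeholder; pick any pod from the list above
    kubectl get pods -l name=daemon-set --watch   # a replacement appears on the same node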
• [SLOW TEST:12.653 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":151,"skipped":2483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:44:16.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:44:40.199: INFO: Container started at 2020-03-10 21:44:17 +0000 UTC, pod became ready at 2020-03-10 21:44:39 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:44:40.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8760" for this suite. 
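[Editor's note, not part of the recorded output] The probe test above observed roughly 22 seconds between container start (21:44:17) and Ready (21:44:39), consistent with a readiness probe gated by an initial delay. The exact probe the suite uses is not shown in this log, so the following is only a generic sketch with hypothetical values:

  $ cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-demo
  spec:
    containers:
    - name: app
      image: busybox:1.29
      args: ["/bin/sh", "-c", "touch /tmp/healthy && sleep 600"]
      readinessProbe:
        exec:
          command: ["cat", "/tmp/healthy"]
        initialDelaySeconds: 20   # pod cannot turn Ready before this elapses
        periodSeconds: 5
  EOF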
• [SLOW TEST:24.094 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2520,"failed":0} [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:44:40.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:44:40.263: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 10 21:44:42.325: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:44:43.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4108" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":153,"skipped":2520,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:44:43.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 10 21:44:43.405: INFO: Waiting up to 5m0s for pod "pod-dbb88027-79c5-46dc-867c-0ec71a43645c" in namespace "emptydir-5339" to be "success or failure" Mar 10 21:44:43.432: INFO: Pod "pod-dbb88027-79c5-46dc-867c-0ec71a43645c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.425703ms Mar 10 21:44:45.435: INFO: Pod "pod-dbb88027-79c5-46dc-867c-0ec71a43645c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.029554982s STEP: Saw pod success Mar 10 21:44:45.435: INFO: Pod "pod-dbb88027-79c5-46dc-867c-0ec71a43645c" satisfied condition "success or failure" Mar 10 21:44:45.437: INFO: Trying to get logs from node jerma-worker pod pod-dbb88027-79c5-46dc-867c-0ec71a43645c container test-container: STEP: delete the pod Mar 10 21:44:45.469: INFO: Waiting for pod pod-dbb88027-79c5-46dc-867c-0ec71a43645c to disappear Mar 10 21:44:45.527: INFO: Pod pod-dbb88027-79c5-46dc-867c-0ec71a43645c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:44:45.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5339" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2521,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:44:45.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6042 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-6042 I0310 21:44:45.786761 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6042, replica count: 2 I0310 21:44:48.837276 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 10 21:44:48.837: INFO: Creating new exec pod Mar 10 21:44:51.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6042 execpod95cjg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 10 21:44:52.183: INFO: stderr: "I0310 21:44:52.106757 1593 log.go:172] (0xc000b34000) (0xc000b4c000) Create stream\nI0310 21:44:52.106821 1593 log.go:172] (0xc000b34000) (0xc000b4c000) Stream added, broadcasting: 1\nI0310 21:44:52.108796 1593 log.go:172] (0xc000b34000) Reply frame received for 1\nI0310 21:44:52.108826 1593 log.go:172] (0xc000b34000) (0xc000ad6000) Create stream\nI0310 21:44:52.108834 1593 log.go:172] (0xc000b34000) (0xc000ad6000) Stream added, broadcasting: 3\nI0310 21:44:52.109586 1593 log.go:172] (0xc000b34000) Reply frame received for 3\nI0310 21:44:52.109606 1593 log.go:172] (0xc000b34000) (0xc000b4c320) Create stream\nI0310 21:44:52.109612 1593 log.go:172] (0xc000b34000) 
(0xc000b4c320) Stream added, broadcasting: 5\nI0310 21:44:52.110500 1593 log.go:172] (0xc000b34000) Reply frame received for 5\nI0310 21:44:52.176757 1593 log.go:172] (0xc000b34000) Data frame received for 5\nI0310 21:44:52.176785 1593 log.go:172] (0xc000b4c320) (5) Data frame handling\nI0310 21:44:52.176805 1593 log.go:172] (0xc000b4c320) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0310 21:44:52.177097 1593 log.go:172] (0xc000b34000) Data frame received for 5\nI0310 21:44:52.177118 1593 log.go:172] (0xc000b4c320) (5) Data frame handling\nI0310 21:44:52.177133 1593 log.go:172] (0xc000b4c320) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0310 21:44:52.177290 1593 log.go:172] (0xc000b34000) Data frame received for 3\nI0310 21:44:52.177303 1593 log.go:172] (0xc000ad6000) (3) Data frame handling\nI0310 21:44:52.177497 1593 log.go:172] (0xc000b34000) Data frame received for 5\nI0310 21:44:52.177513 1593 log.go:172] (0xc000b4c320) (5) Data frame handling\nI0310 21:44:52.179357 1593 log.go:172] (0xc000b34000) Data frame received for 1\nI0310 21:44:52.179376 1593 log.go:172] (0xc000b4c000) (1) Data frame handling\nI0310 21:44:52.179384 1593 log.go:172] (0xc000b4c000) (1) Data frame sent\nI0310 21:44:52.179416 1593 log.go:172] (0xc000b34000) (0xc000b4c000) Stream removed, broadcasting: 1\nI0310 21:44:52.179436 1593 log.go:172] (0xc000b34000) Go away received\nI0310 21:44:52.179773 1593 log.go:172] (0xc000b34000) (0xc000b4c000) Stream removed, broadcasting: 1\nI0310 21:44:52.179790 1593 log.go:172] (0xc000b34000) (0xc000ad6000) Stream removed, broadcasting: 3\nI0310 21:44:52.179798 1593 log.go:172] (0xc000b34000) (0xc000b4c320) Stream removed, broadcasting: 5\n" Mar 10 21:44:52.183: INFO: stdout: "" Mar 10 21:44:52.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6042 execpod95cjg -- /bin/sh -x -c nc -zv -t -w 2 10.109.28.172 80' Mar 10 21:44:52.396: INFO: stderr: "I0310 21:44:52.326515 1613 log.go:172] (0xc000106370) (0xc000912000) Create stream\nI0310 21:44:52.326549 1613 log.go:172] (0xc000106370) (0xc000912000) Stream added, broadcasting: 1\nI0310 21:44:52.327818 1613 log.go:172] (0xc000106370) Reply frame received for 1\nI0310 21:44:52.327835 1613 log.go:172] (0xc000106370) (0xc000912140) Create stream\nI0310 21:44:52.327840 1613 log.go:172] (0xc000106370) (0xc000912140) Stream added, broadcasting: 3\nI0310 21:44:52.328398 1613 log.go:172] (0xc000106370) Reply frame received for 3\nI0310 21:44:52.328422 1613 log.go:172] (0xc000106370) (0xc0009121e0) Create stream\nI0310 21:44:52.328434 1613 log.go:172] (0xc000106370) (0xc0009121e0) Stream added, broadcasting: 5\nI0310 21:44:52.328900 1613 log.go:172] (0xc000106370) Reply frame received for 5\nI0310 21:44:52.391744 1613 log.go:172] (0xc000106370) Data frame received for 5\nI0310 21:44:52.391767 1613 log.go:172] (0xc0009121e0) (5) Data frame handling\nI0310 21:44:52.391773 1613 log.go:172] (0xc0009121e0) (5) Data frame sent\nI0310 21:44:52.391778 1613 log.go:172] (0xc000106370) Data frame received for 5\nI0310 21:44:52.391781 1613 log.go:172] (0xc0009121e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.28.172 80\nConnection to 10.109.28.172 80 port [tcp/http] succeeded!\nI0310 21:44:52.391796 1613 log.go:172] (0xc000106370) Data frame received for 3\nI0310 21:44:52.391800 1613 log.go:172] (0xc000912140) (3) Data frame handling\nI0310 21:44:52.392892 1613 log.go:172] (0xc000106370) Data frame received for 1\nI0310 21:44:52.392909 
1613 log.go:172] (0xc000912000) (1) Data frame handling\nI0310 21:44:52.392927 1613 log.go:172] (0xc000912000) (1) Data frame sent\nI0310 21:44:52.392941 1613 log.go:172] (0xc000106370) (0xc000912000) Stream removed, broadcasting: 1\nI0310 21:44:52.392958 1613 log.go:172] (0xc000106370) Go away received\nI0310 21:44:52.393382 1613 log.go:172] (0xc000106370) (0xc000912000) Stream removed, broadcasting: 1\nI0310 21:44:52.393398 1613 log.go:172] (0xc000106370) (0xc000912140) Stream removed, broadcasting: 3\nI0310 21:44:52.393407 1613 log.go:172] (0xc000106370) (0xc0009121e0) Stream removed, broadcasting: 5\n" Mar 10 21:44:52.396: INFO: stdout: "" Mar 10 21:44:52.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6042 execpod95cjg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 30005' Mar 10 21:44:52.564: INFO: stderr: "I0310 21:44:52.491486 1635 log.go:172] (0xc000018d10) (0xc0009f00a0) Create stream\nI0310 21:44:52.491545 1635 log.go:172] (0xc000018d10) (0xc0009f00a0) Stream added, broadcasting: 1\nI0310 21:44:52.493547 1635 log.go:172] (0xc000018d10) Reply frame received for 1\nI0310 21:44:52.493578 1635 log.go:172] (0xc000018d10) (0xc0005a2780) Create stream\nI0310 21:44:52.493589 1635 log.go:172] (0xc000018d10) (0xc0005a2780) Stream added, broadcasting: 3\nI0310 21:44:52.494255 1635 log.go:172] (0xc000018d10) Reply frame received for 3\nI0310 21:44:52.494281 1635 log.go:172] (0xc000018d10) (0xc000611b80) Create stream\nI0310 21:44:52.494295 1635 log.go:172] (0xc000018d10) (0xc000611b80) Stream added, broadcasting: 5\nI0310 21:44:52.494846 1635 log.go:172] (0xc000018d10) Reply frame received for 5\nI0310 21:44:52.559515 1635 log.go:172] (0xc000018d10) Data frame received for 5\nI0310 21:44:52.559546 1635 log.go:172] (0xc000611b80) (5) Data frame handling\nI0310 21:44:52.559561 1635 log.go:172] (0xc000611b80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.4 30005\nI0310 21:44:52.559667 1635 log.go:172] (0xc000018d10) Data frame received for 5\nI0310 21:44:52.559683 1635 log.go:172] (0xc000611b80) (5) Data frame handling\nConnection to 172.17.0.4 30005 port [tcp/30005] succeeded!\nI0310 21:44:52.559703 1635 log.go:172] (0xc000018d10) Data frame received for 3\nI0310 21:44:52.559729 1635 log.go:172] (0xc0005a2780) (3) Data frame handling\nI0310 21:44:52.559749 1635 log.go:172] (0xc000611b80) (5) Data frame sent\nI0310 21:44:52.559759 1635 log.go:172] (0xc000018d10) Data frame received for 5\nI0310 21:44:52.559765 1635 log.go:172] (0xc000611b80) (5) Data frame handling\nI0310 21:44:52.560927 1635 log.go:172] (0xc000018d10) Data frame received for 1\nI0310 21:44:52.560940 1635 log.go:172] (0xc0009f00a0) (1) Data frame handling\nI0310 21:44:52.560949 1635 log.go:172] (0xc0009f00a0) (1) Data frame sent\nI0310 21:44:52.560962 1635 log.go:172] (0xc000018d10) (0xc0009f00a0) Stream removed, broadcasting: 1\nI0310 21:44:52.560976 1635 log.go:172] (0xc000018d10) Go away received\nI0310 21:44:52.561218 1635 log.go:172] (0xc000018d10) (0xc0009f00a0) Stream removed, broadcasting: 1\nI0310 21:44:52.561235 1635 log.go:172] (0xc000018d10) (0xc0005a2780) Stream removed, broadcasting: 3\nI0310 21:44:52.561241 1635 log.go:172] (0xc000018d10) (0xc000611b80) Stream removed, broadcasting: 5\n" Mar 10 21:44:52.564: INFO: stdout: "" Mar 10 21:44:52.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6042 execpod95cjg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 30005' Mar 10 21:44:52.741: INFO: stderr: "I0310 21:44:52.661130 
1657 log.go:172] (0xc000a14d10) (0xc000956280) Create stream\nI0310 21:44:52.661163 1657 log.go:172] (0xc000a14d10) (0xc000956280) Stream added, broadcasting: 1\nI0310 21:44:52.667029 1657 log.go:172] (0xc000a14d10) Reply frame received for 1\nI0310 21:44:52.667058 1657 log.go:172] (0xc000a14d10) (0xc0007ddea0) Create stream\nI0310 21:44:52.667065 1657 log.go:172] (0xc000a14d10) (0xc0007ddea0) Stream added, broadcasting: 3\nI0310 21:44:52.667583 1657 log.go:172] (0xc000a14d10) Reply frame received for 3\nI0310 21:44:52.667605 1657 log.go:172] (0xc000a14d10) (0xc000956320) Create stream\nI0310 21:44:52.667615 1657 log.go:172] (0xc000a14d10) (0xc000956320) Stream added, broadcasting: 5\nI0310 21:44:52.668185 1657 log.go:172] (0xc000a14d10) Reply frame received for 5\nI0310 21:44:52.736613 1657 log.go:172] (0xc000a14d10) Data frame received for 5\nI0310 21:44:52.736640 1657 log.go:172] (0xc000956320) (5) Data frame handling\nI0310 21:44:52.736652 1657 log.go:172] (0xc000956320) (5) Data frame sent\nI0310 21:44:52.736660 1657 log.go:172] (0xc000a14d10) Data frame received for 5\nI0310 21:44:52.736667 1657 log.go:172] (0xc000956320) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.5 30005\nConnection to 172.17.0.5 30005 port [tcp/30005] succeeded!\nI0310 21:44:52.736687 1657 log.go:172] (0xc000a14d10) Data frame received for 3\nI0310 21:44:52.736695 1657 log.go:172] (0xc0007ddea0) (3) Data frame handling\nI0310 21:44:52.737719 1657 log.go:172] (0xc000a14d10) Data frame received for 1\nI0310 21:44:52.737741 1657 log.go:172] (0xc000956280) (1) Data frame handling\nI0310 21:44:52.737755 1657 log.go:172] (0xc000956280) (1) Data frame sent\nI0310 21:44:52.737774 1657 log.go:172] (0xc000a14d10) (0xc000956280) Stream removed, broadcasting: 1\nI0310 21:44:52.737789 1657 log.go:172] (0xc000a14d10) Go away received\nI0310 21:44:52.738072 1657 log.go:172] (0xc000a14d10) (0xc000956280) Stream removed, broadcasting: 1\nI0310 21:44:52.738087 1657 log.go:172] (0xc000a14d10) (0xc0007ddea0) Stream removed, broadcasting: 3\nI0310 21:44:52.738093 1657 log.go:172] (0xc000a14d10) (0xc000956320) Stream removed, broadcasting: 5\n" Mar 10 21:44:52.741: INFO: stdout: "" Mar 10 21:44:52.741: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:44:52.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6042" for this suite. 
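[Editor's note, not part of the recorded output] The four nc probes above verify the converted service by DNS name, by ClusterIP (10.109.28.172:80), and by NodePort (172.17.0.4:30005 and 172.17.0.5:30005). A hand-run equivalent of the type flip — the patch body is a sketch, and the nc checks only succeed once backing endpoints exist, like the externalname-service replication controller created in the log:

  $ kubectl -n services-6042 create service externalname externalname-service --external-name example.com
  $ kubectl -n services-6042 patch service externalname-service --type merge \
      -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80,"protocol":"TCP"}]}}'
  $ kubectl -n services-6042 get service externalname-service      # note the allocated nodePort
  $ kubectl -n services-6042 exec execpod95cjg -- nc -zv -t -w 2 externalname-service 80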
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.219 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":155,"skipped":2536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:44:52.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0310 21:44:58.887889 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 10 21:44:58.887: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:44:58.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4005" for this suite. 
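[Editor's note, not part of the recorded output] "Keep the rc around until all its pods are deleted" is foreground cascading deletion: with deleteOptions propagationPolicy=Foreground, the RC gets a deletionTimestamp plus the foregroundDeletion finalizer and only disappears after the garbage collector has removed its dependents. A sketch (resource name hypothetical; --cascade=foreground needs a newer kubectl than the v1.17-era client in this run, where --cascade was still a boolean):

  $ kubectl delete rc simpletest.rc --cascade=foreground --wait=false
  $ kubectl get rc simpletest.rc -o jsonpath='{.metadata.deletionTimestamp} {.metadata.finalizers}'
  # prints a timestamp and [foregroundDeletion] until the last pod is gone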
• [SLOW TEST:6.088 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":156,"skipped":2580,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:44:58.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9236 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 10 21:44:58.975: INFO: Found 0 stateful pods, waiting for 3 Mar 10 21:45:09.064: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:45:09.064: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:45:09.064: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:45:09.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9236 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 10 21:45:09.289: INFO: stderr: "I0310 21:45:09.208093 1677 log.go:172] (0xc000104d10) (0xc0007079a0) Create stream\nI0310 21:45:09.208133 1677 log.go:172] (0xc000104d10) (0xc0007079a0) Stream added, broadcasting: 1\nI0310 21:45:09.209968 1677 log.go:172] (0xc000104d10) Reply frame received for 1\nI0310 21:45:09.210004 1677 log.go:172] (0xc000104d10) (0xc000a00000) Create stream\nI0310 21:45:09.210015 1677 log.go:172] (0xc000104d10) (0xc000a00000) Stream added, broadcasting: 3\nI0310 21:45:09.210620 1677 log.go:172] (0xc000104d10) Reply frame received for 3\nI0310 21:45:09.210643 1677 log.go:172] (0xc000104d10) (0xc000707b80) Create stream\nI0310 21:45:09.210649 1677 log.go:172] (0xc000104d10) (0xc000707b80) Stream added, broadcasting: 5\nI0310 21:45:09.211304 1677 log.go:172] (0xc000104d10) Reply frame received for 5\nI0310 21:45:09.250899 1677 log.go:172] (0xc000104d10) Data frame received for 5\nI0310 21:45:09.250917 1677 log.go:172] (0xc000707b80) (5) Data frame handling\nI0310 21:45:09.250927 1677 log.go:172] (0xc000707b80) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0310 21:45:09.285677 1677 log.go:172] (0xc000104d10) Data frame received for 5\nI0310 21:45:09.285697 1677 log.go:172] (0xc000707b80) (5) Data frame handling\nI0310 21:45:09.285711 1677 log.go:172] (0xc000104d10) Data frame received for 3\nI0310 21:45:09.285715 1677 log.go:172] (0xc000a00000) (3) Data frame handling\nI0310 21:45:09.285723 1677 log.go:172] (0xc000a00000) (3) Data frame sent\nI0310 21:45:09.285729 1677 log.go:172] (0xc000104d10) Data frame received for 3\nI0310 21:45:09.285733 1677 log.go:172] (0xc000a00000) (3) Data frame handling\nI0310 21:45:09.286576 1677 log.go:172] (0xc000104d10) Data frame received for 1\nI0310 21:45:09.286603 1677 log.go:172] (0xc0007079a0) (1) Data frame handling\nI0310 21:45:09.286614 1677 log.go:172] (0xc0007079a0) (1) Data frame sent\nI0310 21:45:09.286626 1677 log.go:172] (0xc000104d10) (0xc0007079a0) Stream removed, broadcasting: 1\nI0310 21:45:09.286640 1677 log.go:172] (0xc000104d10) Go away received\nI0310 21:45:09.286856 1677 log.go:172] (0xc000104d10) (0xc0007079a0) Stream removed, broadcasting: 1\nI0310 21:45:09.286867 1677 log.go:172] (0xc000104d10) (0xc000a00000) Stream removed, broadcasting: 3\nI0310 21:45:09.286872 1677 log.go:172] (0xc000104d10) (0xc000707b80) Stream removed, broadcasting: 5\n" Mar 10 21:45:09.289: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 10 21:45:09.289: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 10 21:45:19.339: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 10 21:45:29.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9236 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 10 21:45:29.629: INFO: stderr: "I0310 21:45:29.531019 1700 log.go:172] (0xc0006c69a0) (0xc0006b8000) Create stream\nI0310 21:45:29.531071 1700 log.go:172] (0xc0006c69a0) (0xc0006b8000) Stream added, broadcasting: 1\nI0310 21:45:29.533325 1700 log.go:172] (0xc0006c69a0) Reply frame received for 1\nI0310 21:45:29.533356 1700 log.go:172] (0xc0006c69a0) (0xc0006b80a0) Create stream\nI0310 21:45:29.533364 1700 log.go:172] (0xc0006c69a0) (0xc0006b80a0) Stream added, broadcasting: 3\nI0310 21:45:29.534086 1700 log.go:172] (0xc0006c69a0) Reply frame received for 3\nI0310 21:45:29.534143 1700 log.go:172] (0xc0006c69a0) (0xc0005e48c0) Create stream\nI0310 21:45:29.534161 1700 log.go:172] (0xc0006c69a0) (0xc0005e48c0) Stream added, broadcasting: 5\nI0310 21:45:29.534885 1700 log.go:172] (0xc0006c69a0) Reply frame received for 5\nI0310 21:45:29.625211 1700 log.go:172] (0xc0006c69a0) Data frame received for 3\nI0310 21:45:29.625288 1700 log.go:172] (0xc0006b80a0) (3) Data frame handling\nI0310 21:45:29.625303 1700 log.go:172] (0xc0006b80a0) (3) Data frame sent\nI0310 21:45:29.625314 1700 log.go:172] (0xc0006c69a0) Data frame received for 3\nI0310 21:45:29.625319 1700 log.go:172] (0xc0006b80a0) (3) Data frame handling\nI0310 21:45:29.625341 1700 log.go:172] (0xc0006c69a0) Data frame received for 5\nI0310 21:45:29.625346 1700 log.go:172] (0xc0005e48c0) (5) Data frame handling\nI0310 21:45:29.625357 1700 log.go:172] (0xc0005e48c0) (5) Data frame sent\nI0310 21:45:29.625362 1700 log.go:172] 
(0xc0006c69a0) Data frame received for 5\nI0310 21:45:29.625366 1700 log.go:172] (0xc0005e48c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0310 21:45:29.625467 1700 log.go:172] (0xc0006c69a0) Data frame received for 1\nI0310 21:45:29.625480 1700 log.go:172] (0xc0006b8000) (1) Data frame handling\nI0310 21:45:29.625493 1700 log.go:172] (0xc0006b8000) (1) Data frame sent\nI0310 21:45:29.625703 1700 log.go:172] (0xc0006c69a0) (0xc0006b8000) Stream removed, broadcasting: 1\nI0310 21:45:29.625739 1700 log.go:172] (0xc0006c69a0) Go away received\nI0310 21:45:29.625975 1700 log.go:172] (0xc0006c69a0) (0xc0006b8000) Stream removed, broadcasting: 1\nI0310 21:45:29.625987 1700 log.go:172] (0xc0006c69a0) (0xc0006b80a0) Stream removed, broadcasting: 3\nI0310 21:45:29.625992 1700 log.go:172] (0xc0006c69a0) (0xc0005e48c0) Stream removed, broadcasting: 5\n" Mar 10 21:45:29.629: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 10 21:45:29.629: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 10 21:45:49.674: INFO: Waiting for StatefulSet statefulset-9236/ss2 to complete update Mar 10 21:45:49.674: INFO: Waiting for Pod statefulset-9236/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 10 21:45:59.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9236 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 10 21:45:59.967: INFO: stderr: "I0310 21:45:59.868636 1720 log.go:172] (0xc0003c1130) (0xc0004d7ae0) Create stream\nI0310 21:45:59.868710 1720 log.go:172] (0xc0003c1130) (0xc0004d7ae0) Stream added, broadcasting: 1\nI0310 21:45:59.871255 1720 log.go:172] (0xc0003c1130) Reply frame received for 1\nI0310 21:45:59.871290 1720 log.go:172] (0xc0003c1130) (0xc0008040a0) Create stream\nI0310 21:45:59.871301 1720 log.go:172] (0xc0003c1130) (0xc0008040a0) Stream added, broadcasting: 3\nI0310 21:45:59.872242 1720 log.go:172] (0xc0003c1130) Reply frame received for 3\nI0310 21:45:59.872289 1720 log.go:172] (0xc0003c1130) (0xc000804140) Create stream\nI0310 21:45:59.872301 1720 log.go:172] (0xc0003c1130) (0xc000804140) Stream added, broadcasting: 5\nI0310 21:45:59.873518 1720 log.go:172] (0xc0003c1130) Reply frame received for 5\nI0310 21:45:59.938417 1720 log.go:172] (0xc0003c1130) Data frame received for 5\nI0310 21:45:59.938453 1720 log.go:172] (0xc000804140) (5) Data frame handling\nI0310 21:45:59.938466 1720 log.go:172] (0xc000804140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0310 21:45:59.961439 1720 log.go:172] (0xc0003c1130) Data frame received for 3\nI0310 21:45:59.961460 1720 log.go:172] (0xc0008040a0) (3) Data frame handling\nI0310 21:45:59.961481 1720 log.go:172] (0xc0008040a0) (3) Data frame sent\nI0310 21:45:59.961687 1720 log.go:172] (0xc0003c1130) Data frame received for 5\nI0310 21:45:59.961720 1720 log.go:172] (0xc000804140) (5) Data frame handling\nI0310 21:45:59.961742 1720 log.go:172] (0xc0003c1130) Data frame received for 3\nI0310 21:45:59.961754 1720 log.go:172] (0xc0008040a0) (3) Data frame handling\nI0310 21:45:59.963303 1720 log.go:172] (0xc0003c1130) Data frame received for 1\nI0310 21:45:59.963318 1720 log.go:172] (0xc0004d7ae0) (1) Data frame handling\nI0310 21:45:59.963326 1720 log.go:172] (0xc0004d7ae0) (1) Data frame sent\nI0310 21:45:59.963336 
1720 log.go:172] (0xc0003c1130) (0xc0004d7ae0) Stream removed, broadcasting: 1\nI0310 21:45:59.963505 1720 log.go:172] (0xc0003c1130) Go away received\nI0310 21:45:59.963642 1720 log.go:172] (0xc0003c1130) (0xc0004d7ae0) Stream removed, broadcasting: 1\nI0310 21:45:59.963657 1720 log.go:172] (0xc0003c1130) (0xc0008040a0) Stream removed, broadcasting: 3\nI0310 21:45:59.963665 1720 log.go:172] (0xc0003c1130) (0xc000804140) Stream removed, broadcasting: 5\n" Mar 10 21:45:59.967: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 10 21:45:59.967: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 10 21:46:10.024: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 10 21:46:20.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9236 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 10 21:46:20.376: INFO: stderr: "I0310 21:46:20.317739 1741 log.go:172] (0xc0009a0160) (0xc000850e60) Create stream\nI0310 21:46:20.317777 1741 log.go:172] (0xc0009a0160) (0xc000850e60) Stream added, broadcasting: 1\nI0310 21:46:20.319211 1741 log.go:172] (0xc0009a0160) Reply frame received for 1\nI0310 21:46:20.319240 1741 log.go:172] (0xc0009a0160) (0xc0006fe000) Create stream\nI0310 21:46:20.319249 1741 log.go:172] (0xc0009a0160) (0xc0006fe000) Stream added, broadcasting: 3\nI0310 21:46:20.320469 1741 log.go:172] (0xc0009a0160) Reply frame received for 3\nI0310 21:46:20.320537 1741 log.go:172] (0xc0009a0160) (0xc0009c8000) Create stream\nI0310 21:46:20.320553 1741 log.go:172] (0xc0009a0160) (0xc0009c8000) Stream added, broadcasting: 5\nI0310 21:46:20.321144 1741 log.go:172] (0xc0009a0160) Reply frame received for 5\nI0310 21:46:20.371483 1741 log.go:172] (0xc0009a0160) Data frame received for 3\nI0310 21:46:20.371508 1741 log.go:172] (0xc0006fe000) (3) Data frame handling\nI0310 21:46:20.371514 1741 log.go:172] (0xc0006fe000) (3) Data frame sent\nI0310 21:46:20.371541 1741 log.go:172] (0xc0009a0160) Data frame received for 3\nI0310 21:46:20.371548 1741 log.go:172] (0xc0006fe000) (3) Data frame handling\nI0310 21:46:20.371561 1741 log.go:172] (0xc0009a0160) Data frame received for 5\nI0310 21:46:20.371565 1741 log.go:172] (0xc0009c8000) (5) Data frame handling\nI0310 21:46:20.371571 1741 log.go:172] (0xc0009c8000) (5) Data frame sent\nI0310 21:46:20.371575 1741 log.go:172] (0xc0009a0160) Data frame received for 5\nI0310 21:46:20.371578 1741 log.go:172] (0xc0009c8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0310 21:46:20.372534 1741 log.go:172] (0xc0009a0160) Data frame received for 1\nI0310 21:46:20.372550 1741 log.go:172] (0xc000850e60) (1) Data frame handling\nI0310 21:46:20.372559 1741 log.go:172] (0xc000850e60) (1) Data frame sent\nI0310 21:46:20.372569 1741 log.go:172] (0xc0009a0160) (0xc000850e60) Stream removed, broadcasting: 1\nI0310 21:46:20.372588 1741 log.go:172] (0xc0009a0160) Go away received\nI0310 21:46:20.372788 1741 log.go:172] (0xc0009a0160) (0xc000850e60) Stream removed, broadcasting: 1\nI0310 21:46:20.372799 1741 log.go:172] (0xc0009a0160) (0xc0006fe000) Stream removed, broadcasting: 3\nI0310 21:46:20.372806 1741 log.go:172] (0xc0009a0160) (0xc0009c8000) Stream removed, broadcasting: 5\n" Mar 10 21:46:20.376: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 10 21:46:20.376: INFO: 
stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 10 21:46:40.392: INFO: Waiting for StatefulSet statefulset-9236/ss2 to complete update Mar 10 21:46:40.392: INFO: Waiting for Pod statefulset-9236/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 10 21:46:50.400: INFO: Deleting all statefulset in ns statefulset-9236 Mar 10 21:46:50.403: INFO: Scaling statefulset ss2 to 0 Mar 10 21:47:20.429: INFO: Waiting for statefulset status.replicas updated to 0 Mar 10 21:47:20.431: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:47:20.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9236" for this suite. • [SLOW TEST:141.562 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":157,"skipped":2587,"failed":0} SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:47:20.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-3804 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3804 to expose endpoints map[] Mar 10 21:47:20.637: INFO: Get endpoints failed (29.874556ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 10 21:47:21.640: INFO: successfully validated that service endpoint-test2 in namespace services-3804 exposes endpoints map[] (1.033206361s elapsed) STEP: Creating pod pod1 in namespace services-3804 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3804 to expose endpoints map[pod1:[80]] Mar 10 21:47:23.682: INFO: successfully validated that service endpoint-test2 in namespace services-3804 exposes endpoints map[pod1:[80]] (2.034800078s elapsed) STEP: Creating pod pod2 in namespace services-3804 STEP: waiting up to 3m0s for service 
endpoint-test2 in namespace services-3804 to expose endpoints map[pod1:[80] pod2:[80]] Mar 10 21:47:25.848: INFO: successfully validated that service endpoint-test2 in namespace services-3804 exposes endpoints map[pod1:[80] pod2:[80]] (2.16287491s elapsed) STEP: Deleting pod pod1 in namespace services-3804 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3804 to expose endpoints map[pod2:[80]] Mar 10 21:47:25.894: INFO: successfully validated that service endpoint-test2 in namespace services-3804 exposes endpoints map[pod2:[80]] (29.035162ms elapsed) STEP: Deleting pod pod2 in namespace services-3804 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3804 to expose endpoints map[] Mar 10 21:47:25.986: INFO: successfully validated that service endpoint-test2 in namespace services-3804 exposes endpoints map[] (84.538203ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:47:26.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3804" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:5.619 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":158,"skipped":2591,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:47:26.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 10 21:47:26.172: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:47:26.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4605" for this suite. 
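[Editor's note, not part of the recorded output] Proxying with --port 0 asks the kernel for any free port; the test then curls /api/ on whatever port the proxy reports. Done by hand (the port shown is an example — substitute whatever the proxy prints; --disable-filter is unsafe outside test environments):

  $ kubectl proxy --port=0 --disable-filter &
  # prints e.g. "Starting to serve on 127.0.0.1:43211"
  $ curl -s http://127.0.0.1:43211/api/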
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":159,"skipped":2596,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:47:26.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6612.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6612.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6612.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6612.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 228.172.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.172.228_udp@PTR;check="$$(dig +tcp +noall +answer +search 228.172.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.172.228_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6612.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6612.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6612.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6612.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6612.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 228.172.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.172.228_udp@PTR;check="$$(dig +tcp +noall +answer +search 228.172.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.172.228_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 10 21:47:30.532: INFO: Unable to read wheezy_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:30.535: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:30.538: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:30.541: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:30.586: INFO: Unable to read jessie_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:30.589: INFO: Unable to read jessie_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:30.592: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:30.595: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:30.614: INFO: Lookups using dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58 failed for: [wheezy_udp@dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_udp@dns-test-service.dns-6612.svc.cluster.local jessie_tcp@dns-test-service.dns-6612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local] Mar 10 21:47:35.618: INFO: Unable to read wheezy_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:35.622: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods 
dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:35.626: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:35.629: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:35.650: INFO: Unable to read jessie_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:35.653: INFO: Unable to read jessie_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:35.656: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:35.659: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:35.677: INFO: Lookups using dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58 failed for: [wheezy_udp@dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_udp@dns-test-service.dns-6612.svc.cluster.local jessie_tcp@dns-test-service.dns-6612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local] Mar 10 21:47:40.618: INFO: Unable to read wheezy_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:40.622: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:40.625: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:40.628: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:40.647: INFO: Unable to read jessie_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the 
server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:40.649: INFO: Unable to read jessie_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:40.651: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:40.654: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:40.668: INFO: Lookups using dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58 failed for: [wheezy_udp@dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_udp@dns-test-service.dns-6612.svc.cluster.local jessie_tcp@dns-test-service.dns-6612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local] Mar 10 21:47:45.619: INFO: Unable to read wheezy_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:45.621: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:45.625: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:45.627: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:45.644: INFO: Unable to read jessie_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:45.646: INFO: Unable to read jessie_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:45.648: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:45.650: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod 
dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:45.663: INFO: Lookups using dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58 failed for: [wheezy_udp@dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_udp@dns-test-service.dns-6612.svc.cluster.local jessie_tcp@dns-test-service.dns-6612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local] Mar 10 21:47:50.625: INFO: Unable to read wheezy_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:50.628: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:50.632: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:50.635: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:50.670: INFO: Unable to read jessie_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:50.672: INFO: Unable to read jessie_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:50.675: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:50.677: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:50.692: INFO: Lookups using dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58 failed for: [wheezy_udp@dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_udp@dns-test-service.dns-6612.svc.cluster.local jessie_tcp@dns-test-service.dns-6612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local] Mar 10 
21:47:55.618: INFO: Unable to read wheezy_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:55.620: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:55.622: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:55.625: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:55.640: INFO: Unable to read jessie_udp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:55.643: INFO: Unable to read jessie_tcp@dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:55.645: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:55.648: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local from pod dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58: the server could not find the requested resource (get pods dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58) Mar 10 21:47:55.662: INFO: Lookups using dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58 failed for: [wheezy_udp@dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@dns-test-service.dns-6612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_udp@dns-test-service.dns-6612.svc.cluster.local jessie_tcp@dns-test-service.dns-6612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc.cluster.local] Mar 10 21:48:00.683: INFO: DNS probes using dns-6612/dns-test-249a3ee7-d4c6-47ec-866f-da8a8c3b5e58 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:48:00.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6612" for this suite. 
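
The failing probe rounds above are the test's normal retry loop: two query pods (one wheezy-based, one jessie-based) resolve the service's A and SRV records over both UDP and TCP and write the results to files, and the framework polls each result file through the pod's API proxy every five seconds, getting "the server could not find the requested resource" until the probe has written its results; here the run converges to success at 21:48:00. As a rough manual analogue of what the probe pods do, one could run a DNS utility pod and query the same names; the image tag, the pod name, and the use of BIND nslookup's -vc flag (force TCP) are assumptions, not taken from this run:

kubectl run dns-probe -n dns-6612 --restart=Never \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7 -- sleep 3600
# A record of the test service, over UDP and then over TCP
kubectl exec -n dns-6612 dns-probe -- nslookup dns-test-service.dns-6612.svc.cluster.local
kubectl exec -n dns-6612 dns-probe -- nslookup -vc dns-test-service.dns-6612.svc.cluster.local
# SRV record published for the service's named "http" port
kubectl exec -n dns-6612 dns-probe -- nslookup -type=SRV _http._tcp.dns-test-service.dns-6612.svc.cluster.local
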
• [SLOW TEST:34.763 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":160,"skipped":2610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:48:01.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-be0d1f61-e0ea-43a5-bbac-74a97177ac47 STEP: Creating a pod to test consume secrets Mar 10 21:48:01.137: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8d7ab855-103c-4927-b0cd-61d6d7c86245" in namespace "projected-4189" to be "success or failure" Mar 10 21:48:01.140: INFO: Pod "pod-projected-secrets-8d7ab855-103c-4927-b0cd-61d6d7c86245": Phase="Pending", Reason="", readiness=false. Elapsed: 3.085175ms Mar 10 21:48:03.142: INFO: Pod "pod-projected-secrets-8d7ab855-103c-4927-b0cd-61d6d7c86245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005710718s STEP: Saw pod success Mar 10 21:48:03.142: INFO: Pod "pod-projected-secrets-8d7ab855-103c-4927-b0cd-61d6d7c86245" satisfied condition "success or failure" Mar 10 21:48:03.145: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-8d7ab855-103c-4927-b0cd-61d6d7c86245 container projected-secret-volume-test: STEP: delete the pod Mar 10 21:48:03.197: INFO: Waiting for pod pod-projected-secrets-8d7ab855-103c-4927-b0cd-61d6d7c86245 to disappear Mar 10 21:48:03.206: INFO: Pod pod-projected-secrets-8d7ab855-103c-4927-b0cd-61d6d7c86245 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:48:03.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4189" for this suite. 
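
The projected-secret test above mounts a generated secret through a "projected" volume with a defaultMode, and treats the pod reaching Succeeded as proof that the file appeared with the requested permissions and content. A minimal sketch of the shape of that pod, assuming illustrative names and a 0400 mode (the e2e test generates its own secret name and mode):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # list the mounted file's mode, then print its content
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: demo-secret
EOF
kubectl logs projected-secret-demo   # expect -r-------- permissions and "value-1"
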
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2643,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:48:03.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:48:03.268: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67dd86c5-4d63-4170-a97a-25e37e4c98c8" in namespace "downward-api-3399" to be "success or failure" Mar 10 21:48:03.290: INFO: Pod "downwardapi-volume-67dd86c5-4d63-4170-a97a-25e37e4c98c8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.491015ms Mar 10 21:48:05.294: INFO: Pod "downwardapi-volume-67dd86c5-4d63-4170-a97a-25e37e4c98c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026061512s STEP: Saw pod success Mar 10 21:48:05.294: INFO: Pod "downwardapi-volume-67dd86c5-4d63-4170-a97a-25e37e4c98c8" satisfied condition "success or failure" Mar 10 21:48:05.297: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-67dd86c5-4d63-4170-a97a-25e37e4c98c8 container client-container: STEP: delete the pod Mar 10 21:48:05.390: INFO: Waiting for pod downwardapi-volume-67dd86c5-4d63-4170-a97a-25e37e4c98c8 to disappear Mar 10 21:48:05.398: INFO: Pod downwardapi-volume-67dd86c5-4d63-4170-a97a-25e37e4c98c8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:48:05.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3399" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2651,"failed":0} SSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:48:05.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:48:05.446: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3433 I0310 21:48:05.460155 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3433, replica count: 1 I0310 21:48:06.510580 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0310 21:48:07.510823 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 10 21:48:07.642: INFO: Created: latency-svc-4mwbf Mar 10 21:48:07.677: INFO: Got endpoints: latency-svc-4mwbf [66.922008ms] Mar 10 21:48:07.719: INFO: Created: latency-svc-hhf69 Mar 10 21:48:07.729: INFO: Got endpoints: latency-svc-hhf69 [51.870492ms] Mar 10 21:48:07.755: INFO: Created: latency-svc-bj8lp Mar 10 21:48:07.807: INFO: Got endpoints: latency-svc-bj8lp [129.467473ms] Mar 10 21:48:07.840: INFO: Created: latency-svc-hbzbq Mar 10 21:48:07.870: INFO: Got endpoints: latency-svc-hbzbq [192.226738ms] Mar 10 21:48:07.895: INFO: Created: latency-svc-z8xj5 Mar 10 21:48:07.898: INFO: Got endpoints: latency-svc-z8xj5 [220.098142ms] Mar 10 21:48:07.938: INFO: Created: latency-svc-8krxs Mar 10 21:48:07.964: INFO: Got endpoints: latency-svc-8krxs [285.709563ms] Mar 10 21:48:07.988: INFO: Created: latency-svc-smms7 Mar 10 21:48:07.995: INFO: Got endpoints: latency-svc-smms7 [316.827031ms] Mar 10 21:48:08.024: INFO: Created: latency-svc-zfxr9 Mar 10 21:48:08.031: INFO: Got endpoints: latency-svc-zfxr9 [353.358578ms] Mar 10 21:48:08.088: INFO: Created: latency-svc-fs8nl Mar 10 21:48:08.090: INFO: Got endpoints: latency-svc-fs8nl [412.854736ms] Mar 10 21:48:08.122: INFO: Created: latency-svc-59d4s Mar 10 21:48:08.127: INFO: Got endpoints: latency-svc-59d4s [448.690113ms] Mar 10 21:48:08.152: INFO: Created: latency-svc-4lnb5 Mar 10 21:48:08.157: INFO: Got endpoints: latency-svc-4lnb5 [478.95265ms] Mar 10 21:48:08.176: INFO: Created: latency-svc-6mwzn Mar 10 21:48:08.182: INFO: Got endpoints: latency-svc-6mwzn [503.467919ms] Mar 10 21:48:08.228: INFO: Created: latency-svc-jdptc Mar 10 21:48:08.236: INFO: Got endpoints: latency-svc-jdptc [557.96979ms] Mar 10 21:48:08.266: INFO: Created: latency-svc-6pc6q Mar 10 21:48:08.284: INFO: Got endpoints: latency-svc-6pc6q [606.266221ms] Mar 10 21:48:08.351: INFO: Created: latency-svc-qrdwt Mar 10 21:48:08.354: INFO: Got endpoints: latency-svc-qrdwt [676.38494ms] Mar 10 21:48:08.404: INFO: 
Created: latency-svc-rjnns Mar 10 21:48:08.424: INFO: Got endpoints: latency-svc-rjnns [745.987248ms] Mar 10 21:48:08.489: INFO: Created: latency-svc-7598s Mar 10 21:48:08.495: INFO: Got endpoints: latency-svc-7598s [765.745393ms] Mar 10 21:48:08.530: INFO: Created: latency-svc-4llxw Mar 10 21:48:08.538: INFO: Got endpoints: latency-svc-4llxw [731.056513ms] Mar 10 21:48:08.572: INFO: Created: latency-svc-h4szk Mar 10 21:48:08.580: INFO: Got endpoints: latency-svc-h4szk [710.047417ms] Mar 10 21:48:08.639: INFO: Created: latency-svc-r4q5c Mar 10 21:48:08.666: INFO: Got endpoints: latency-svc-r4q5c [768.441405ms] Mar 10 21:48:08.666: INFO: Created: latency-svc-kpshp Mar 10 21:48:08.671: INFO: Got endpoints: latency-svc-kpshp [707.595574ms] Mar 10 21:48:08.698: INFO: Created: latency-svc-7ng4s Mar 10 21:48:08.720: INFO: Got endpoints: latency-svc-7ng4s [724.782087ms] Mar 10 21:48:08.771: INFO: Created: latency-svc-fwbjm Mar 10 21:48:08.830: INFO: Got endpoints: latency-svc-fwbjm [799.094789ms] Mar 10 21:48:08.864: INFO: Created: latency-svc-z6n64 Mar 10 21:48:08.920: INFO: Got endpoints: latency-svc-z6n64 [829.344409ms] Mar 10 21:48:08.950: INFO: Created: latency-svc-9sh5x Mar 10 21:48:08.961: INFO: Got endpoints: latency-svc-9sh5x [834.119338ms] Mar 10 21:48:08.986: INFO: Created: latency-svc-jt8ws Mar 10 21:48:08.997: INFO: Got endpoints: latency-svc-jt8ws [839.690937ms] Mar 10 21:48:09.040: INFO: Created: latency-svc-9lgfk Mar 10 21:48:09.051: INFO: Got endpoints: latency-svc-9lgfk [869.234011ms] Mar 10 21:48:09.081: INFO: Created: latency-svc-sr6qr Mar 10 21:48:09.088: INFO: Got endpoints: latency-svc-sr6qr [851.392897ms] Mar 10 21:48:09.110: INFO: Created: latency-svc-8vc7z Mar 10 21:48:09.136: INFO: Got endpoints: latency-svc-8vc7z [851.786695ms] Mar 10 21:48:09.184: INFO: Created: latency-svc-7p2zg Mar 10 21:48:09.190: INFO: Got endpoints: latency-svc-7p2zg [836.02251ms] Mar 10 21:48:09.220: INFO: Created: latency-svc-2d9qq Mar 10 21:48:09.239: INFO: Got endpoints: latency-svc-2d9qq [814.742013ms] Mar 10 21:48:09.322: INFO: Created: latency-svc-gswc9 Mar 10 21:48:09.351: INFO: Created: latency-svc-vlzs9 Mar 10 21:48:09.351: INFO: Got endpoints: latency-svc-gswc9 [855.558693ms] Mar 10 21:48:09.374: INFO: Got endpoints: latency-svc-vlzs9 [835.792413ms] Mar 10 21:48:09.400: INFO: Created: latency-svc-mn7zl Mar 10 21:48:09.453: INFO: Got endpoints: latency-svc-mn7zl [872.959999ms] Mar 10 21:48:09.466: INFO: Created: latency-svc-22njv Mar 10 21:48:09.475: INFO: Got endpoints: latency-svc-22njv [808.351697ms] Mar 10 21:48:09.500: INFO: Created: latency-svc-8t47f Mar 10 21:48:09.510: INFO: Got endpoints: latency-svc-8t47f [839.077198ms] Mar 10 21:48:09.536: INFO: Created: latency-svc-jjqpt Mar 10 21:48:09.547: INFO: Got endpoints: latency-svc-jjqpt [827.570501ms] Mar 10 21:48:09.603: INFO: Created: latency-svc-5j78l Mar 10 21:48:09.613: INFO: Got endpoints: latency-svc-5j78l [782.842699ms] Mar 10 21:48:09.640: INFO: Created: latency-svc-dsg7f Mar 10 21:48:09.643: INFO: Got endpoints: latency-svc-dsg7f [723.578971ms] Mar 10 21:48:09.676: INFO: Created: latency-svc-hm287 Mar 10 21:48:09.680: INFO: Got endpoints: latency-svc-hm287 [718.86702ms] Mar 10 21:48:09.747: INFO: Created: latency-svc-6wcd9 Mar 10 21:48:09.749: INFO: Got endpoints: latency-svc-6wcd9 [752.117096ms] Mar 10 21:48:09.788: INFO: Created: latency-svc-4dm62 Mar 10 21:48:09.794: INFO: Got endpoints: latency-svc-4dm62 [743.650905ms] Mar 10 21:48:09.821: INFO: Created: latency-svc-4tdxt Mar 10 21:48:09.832: INFO: Got endpoints: 
latency-svc-4tdxt [744.398548ms] Mar 10 21:48:09.891: INFO: Created: latency-svc-2pwhg Mar 10 21:48:09.893: INFO: Got endpoints: latency-svc-2pwhg [757.00905ms] Mar 10 21:48:09.937: INFO: Created: latency-svc-pjmhm Mar 10 21:48:09.947: INFO: Got endpoints: latency-svc-pjmhm [756.617495ms] Mar 10 21:48:09.968: INFO: Created: latency-svc-v8kjd Mar 10 21:48:09.976: INFO: Got endpoints: latency-svc-v8kjd [737.460759ms] Mar 10 21:48:10.034: INFO: Created: latency-svc-9llqq Mar 10 21:48:10.052: INFO: Got endpoints: latency-svc-9llqq [701.242275ms] Mar 10 21:48:10.096: INFO: Created: latency-svc-rm7p6 Mar 10 21:48:10.099: INFO: Got endpoints: latency-svc-rm7p6 [724.272929ms] Mar 10 21:48:10.126: INFO: Created: latency-svc-gmhg8 Mar 10 21:48:10.172: INFO: Got endpoints: latency-svc-gmhg8 [719.005548ms] Mar 10 21:48:10.172: INFO: Created: latency-svc-9n7s9 Mar 10 21:48:10.182: INFO: Got endpoints: latency-svc-9n7s9 [707.163332ms] Mar 10 21:48:10.209: INFO: Created: latency-svc-7bljk Mar 10 21:48:10.219: INFO: Got endpoints: latency-svc-7bljk [708.576664ms] Mar 10 21:48:10.250: INFO: Created: latency-svc-kwh7k Mar 10 21:48:10.261: INFO: Got endpoints: latency-svc-kwh7k [713.525146ms] Mar 10 21:48:10.312: INFO: Created: latency-svc-srlbn Mar 10 21:48:10.351: INFO: Got endpoints: latency-svc-srlbn [738.494845ms] Mar 10 21:48:10.376: INFO: Created: latency-svc-85htt Mar 10 21:48:10.382: INFO: Got endpoints: latency-svc-85htt [738.367756ms] Mar 10 21:48:10.453: INFO: Created: latency-svc-4xsxn Mar 10 21:48:10.468: INFO: Got endpoints: latency-svc-4xsxn [787.818084ms] Mar 10 21:48:10.492: INFO: Created: latency-svc-pfbhb Mar 10 21:48:10.516: INFO: Got endpoints: latency-svc-pfbhb [766.703074ms] Mar 10 21:48:10.544: INFO: Created: latency-svc-tl5vq Mar 10 21:48:10.597: INFO: Got endpoints: latency-svc-tl5vq [802.154799ms] Mar 10 21:48:10.641: INFO: Created: latency-svc-6lgkb Mar 10 21:48:10.655: INFO: Got endpoints: latency-svc-6lgkb [822.825671ms] Mar 10 21:48:10.678: INFO: Created: latency-svc-dzxgv Mar 10 21:48:10.683: INFO: Got endpoints: latency-svc-dzxgv [790.128728ms] Mar 10 21:48:10.753: INFO: Created: latency-svc-msrbt Mar 10 21:48:10.762: INFO: Got endpoints: latency-svc-msrbt [815.132885ms] Mar 10 21:48:10.791: INFO: Created: latency-svc-k7bhg Mar 10 21:48:10.806: INFO: Got endpoints: latency-svc-k7bhg [829.373707ms] Mar 10 21:48:10.834: INFO: Created: latency-svc-f7lx5 Mar 10 21:48:10.909: INFO: Got endpoints: latency-svc-f7lx5 [856.558324ms] Mar 10 21:48:10.942: INFO: Created: latency-svc-76t7s Mar 10 21:48:10.949: INFO: Got endpoints: latency-svc-76t7s [850.039489ms] Mar 10 21:48:10.995: INFO: Created: latency-svc-x5pwj Mar 10 21:48:11.004: INFO: Got endpoints: latency-svc-x5pwj [831.29098ms] Mar 10 21:48:11.052: INFO: Created: latency-svc-x6zlg Mar 10 21:48:11.059: INFO: Got endpoints: latency-svc-x6zlg [876.729397ms] Mar 10 21:48:11.085: INFO: Created: latency-svc-8pjbs Mar 10 21:48:11.110: INFO: Got endpoints: latency-svc-8pjbs [891.037543ms] Mar 10 21:48:11.132: INFO: Created: latency-svc-vvh8r Mar 10 21:48:11.150: INFO: Got endpoints: latency-svc-vvh8r [889.378057ms] Mar 10 21:48:11.210: INFO: Created: latency-svc-qw8xw Mar 10 21:48:11.246: INFO: Got endpoints: latency-svc-qw8xw [894.531317ms] Mar 10 21:48:11.288: INFO: Created: latency-svc-7q2ms Mar 10 21:48:11.298: INFO: Got endpoints: latency-svc-7q2ms [916.297066ms] Mar 10 21:48:11.351: INFO: Created: latency-svc-b8zm8 Mar 10 21:48:11.359: INFO: Got endpoints: latency-svc-b8zm8 [891.631338ms] Mar 10 21:48:11.397: INFO: Created: 
latency-svc-dhprn Mar 10 21:48:11.407: INFO: Got endpoints: latency-svc-dhprn [891.552462ms] Mar 10 21:48:11.441: INFO: Created: latency-svc-72t2g Mar 10 21:48:11.451: INFO: Got endpoints: latency-svc-72t2g [853.995291ms] Mar 10 21:48:11.499: INFO: Created: latency-svc-phhbw Mar 10 21:48:11.510: INFO: Got endpoints: latency-svc-phhbw [855.228334ms] Mar 10 21:48:11.537: INFO: Created: latency-svc-7r2sw Mar 10 21:48:11.547: INFO: Got endpoints: latency-svc-7r2sw [863.217855ms] Mar 10 21:48:11.583: INFO: Created: latency-svc-mbg9m Mar 10 21:48:11.615: INFO: Got endpoints: latency-svc-mbg9m [852.73083ms] Mar 10 21:48:11.628: INFO: Created: latency-svc-zdpd7 Mar 10 21:48:11.637: INFO: Got endpoints: latency-svc-zdpd7 [831.715659ms] Mar 10 21:48:11.669: INFO: Created: latency-svc-srqf7 Mar 10 21:48:11.674: INFO: Got endpoints: latency-svc-srqf7 [765.160671ms] Mar 10 21:48:11.753: INFO: Created: latency-svc-g9d88 Mar 10 21:48:11.777: INFO: Created: latency-svc-xx7fb Mar 10 21:48:11.778: INFO: Got endpoints: latency-svc-g9d88 [829.025049ms] Mar 10 21:48:11.782: INFO: Got endpoints: latency-svc-xx7fb [778.706095ms] Mar 10 21:48:11.836: INFO: Created: latency-svc-lmckd Mar 10 21:48:11.843: INFO: Got endpoints: latency-svc-lmckd [784.057445ms] Mar 10 21:48:11.902: INFO: Created: latency-svc-5kb46 Mar 10 21:48:11.927: INFO: Got endpoints: latency-svc-5kb46 [816.8887ms] Mar 10 21:48:11.957: INFO: Created: latency-svc-r84kw Mar 10 21:48:11.963: INFO: Got endpoints: latency-svc-r84kw [813.025264ms] Mar 10 21:48:11.999: INFO: Created: latency-svc-cmf26 Mar 10 21:48:12.055: INFO: Got endpoints: latency-svc-cmf26 [809.358301ms] Mar 10 21:48:12.058: INFO: Created: latency-svc-dmgsj Mar 10 21:48:12.066: INFO: Got endpoints: latency-svc-dmgsj [767.896527ms] Mar 10 21:48:12.094: INFO: Created: latency-svc-rjfx2 Mar 10 21:48:12.119: INFO: Got endpoints: latency-svc-rjfx2 [760.046963ms] Mar 10 21:48:12.143: INFO: Created: latency-svc-d4cw8 Mar 10 21:48:12.146: INFO: Got endpoints: latency-svc-d4cw8 [738.205891ms] Mar 10 21:48:12.197: INFO: Created: latency-svc-84gw8 Mar 10 21:48:12.205: INFO: Got endpoints: latency-svc-84gw8 [754.438343ms] Mar 10 21:48:12.232: INFO: Created: latency-svc-p582j Mar 10 21:48:12.241: INFO: Got endpoints: latency-svc-p582j [731.374203ms] Mar 10 21:48:12.261: INFO: Created: latency-svc-hzkx5 Mar 10 21:48:12.334: INFO: Got endpoints: latency-svc-hzkx5 [787.171635ms] Mar 10 21:48:12.353: INFO: Created: latency-svc-qwcjg Mar 10 21:48:12.388: INFO: Got endpoints: latency-svc-qwcjg [772.52997ms] Mar 10 21:48:12.501: INFO: Created: latency-svc-slbxj Mar 10 21:48:12.508: INFO: Got endpoints: latency-svc-slbxj [870.38708ms] Mar 10 21:48:12.556: INFO: Created: latency-svc-d8pdk Mar 10 21:48:12.568: INFO: Got endpoints: latency-svc-d8pdk [894.202072ms] Mar 10 21:48:12.657: INFO: Created: latency-svc-jr28p Mar 10 21:48:12.660: INFO: Got endpoints: latency-svc-jr28p [882.012961ms] Mar 10 21:48:12.742: INFO: Created: latency-svc-hbwmh Mar 10 21:48:12.800: INFO: Got endpoints: latency-svc-hbwmh [1.017960408s] Mar 10 21:48:12.886: INFO: Created: latency-svc-p7dhk Mar 10 21:48:12.893: INFO: Got endpoints: latency-svc-p7dhk [1.050037809s] Mar 10 21:48:12.939: INFO: Created: latency-svc-spzr4 Mar 10 21:48:12.947: INFO: Got endpoints: latency-svc-spzr4 [1.020032201s] Mar 10 21:48:12.971: INFO: Created: latency-svc-qd6nb Mar 10 21:48:12.978: INFO: Got endpoints: latency-svc-qd6nb [1.014379036s] Mar 10 21:48:13.001: INFO: Created: latency-svc-9gz5p Mar 10 21:48:13.009: INFO: Got endpoints: 
latency-svc-9gz5p [953.095966ms] Mar 10 21:48:13.032: INFO: Created: latency-svc-dgk6n Mar 10 21:48:13.083: INFO: Got endpoints: latency-svc-dgk6n [1.016480179s] Mar 10 21:48:13.087: INFO: Created: latency-svc-7c7kp Mar 10 21:48:13.093: INFO: Got endpoints: latency-svc-7c7kp [973.862737ms] Mar 10 21:48:13.119: INFO: Created: latency-svc-wq8lc Mar 10 21:48:13.131: INFO: Got endpoints: latency-svc-wq8lc [984.991804ms] Mar 10 21:48:13.161: INFO: Created: latency-svc-4vsx2 Mar 10 21:48:13.166: INFO: Got endpoints: latency-svc-4vsx2 [960.30568ms] Mar 10 21:48:13.232: INFO: Created: latency-svc-t4wlc Mar 10 21:48:13.234: INFO: Got endpoints: latency-svc-t4wlc [992.775628ms] Mar 10 21:48:13.299: INFO: Created: latency-svc-6sjgn Mar 10 21:48:13.317: INFO: Got endpoints: latency-svc-6sjgn [983.37631ms] Mar 10 21:48:13.388: INFO: Created: latency-svc-r5z59 Mar 10 21:48:13.390: INFO: Got endpoints: latency-svc-r5z59 [1.002405041s] Mar 10 21:48:13.439: INFO: Created: latency-svc-5mrsq Mar 10 21:48:13.469: INFO: Got endpoints: latency-svc-5mrsq [961.558915ms] Mar 10 21:48:13.531: INFO: Created: latency-svc-wjmt2 Mar 10 21:48:13.534: INFO: Got endpoints: latency-svc-wjmt2 [965.701782ms] Mar 10 21:48:13.563: INFO: Created: latency-svc-2bk5h Mar 10 21:48:13.566: INFO: Got endpoints: latency-svc-2bk5h [905.888636ms] Mar 10 21:48:13.608: INFO: Created: latency-svc-n5www Mar 10 21:48:13.618: INFO: Got endpoints: latency-svc-n5www [818.036332ms] Mar 10 21:48:13.669: INFO: Created: latency-svc-2npjg Mar 10 21:48:13.671: INFO: Got endpoints: latency-svc-2npjg [778.680952ms] Mar 10 21:48:13.703: INFO: Created: latency-svc-22bkg Mar 10 21:48:13.721: INFO: Got endpoints: latency-svc-22bkg [774.122195ms] Mar 10 21:48:13.755: INFO: Created: latency-svc-bpg5g Mar 10 21:48:13.757: INFO: Got endpoints: latency-svc-bpg5g [779.545355ms] Mar 10 21:48:13.819: INFO: Created: latency-svc-lwpzt Mar 10 21:48:13.821: INFO: Got endpoints: latency-svc-lwpzt [812.378686ms] Mar 10 21:48:13.853: INFO: Created: latency-svc-v4tp8 Mar 10 21:48:13.873: INFO: Got endpoints: latency-svc-v4tp8 [790.052626ms] Mar 10 21:48:13.918: INFO: Created: latency-svc-dkzfm Mar 10 21:48:13.974: INFO: Got endpoints: latency-svc-dkzfm [880.685542ms] Mar 10 21:48:14.001: INFO: Created: latency-svc-4vsb7 Mar 10 21:48:14.011: INFO: Got endpoints: latency-svc-4vsb7 [880.104455ms] Mar 10 21:48:14.039: INFO: Created: latency-svc-8vvk7 Mar 10 21:48:14.047: INFO: Got endpoints: latency-svc-8vvk7 [881.732759ms] Mar 10 21:48:14.070: INFO: Created: latency-svc-5wnlc Mar 10 21:48:14.124: INFO: Got endpoints: latency-svc-5wnlc [889.298082ms] Mar 10 21:48:14.126: INFO: Created: latency-svc-9q48p Mar 10 21:48:14.133: INFO: Got endpoints: latency-svc-9q48p [815.331191ms] Mar 10 21:48:14.157: INFO: Created: latency-svc-2fmrl Mar 10 21:48:14.163: INFO: Got endpoints: latency-svc-2fmrl [772.46269ms] Mar 10 21:48:14.187: INFO: Created: latency-svc-25xsl Mar 10 21:48:14.194: INFO: Got endpoints: latency-svc-25xsl [724.239783ms] Mar 10 21:48:14.263: INFO: Created: latency-svc-thrjl Mar 10 21:48:14.298: INFO: Got endpoints: latency-svc-thrjl [763.539791ms] Mar 10 21:48:14.301: INFO: Created: latency-svc-7w9sh Mar 10 21:48:14.314: INFO: Got endpoints: latency-svc-7w9sh [748.072061ms] Mar 10 21:48:14.338: INFO: Created: latency-svc-xpp6k Mar 10 21:48:14.344: INFO: Got endpoints: latency-svc-xpp6k [726.001555ms] Mar 10 21:48:14.410: INFO: Created: latency-svc-s4qkb Mar 10 21:48:14.417: INFO: Got endpoints: latency-svc-s4qkb [745.228154ms] Mar 10 21:48:14.443: INFO: Created: 
latency-svc-2gm4v Mar 10 21:48:14.459: INFO: Got endpoints: latency-svc-2gm4v [738.01168ms] Mar 10 21:48:14.502: INFO: Created: latency-svc-kbkrv Mar 10 21:48:14.544: INFO: Got endpoints: latency-svc-kbkrv [786.210685ms] Mar 10 21:48:14.548: INFO: Created: latency-svc-tjmd8 Mar 10 21:48:14.565: INFO: Got endpoints: latency-svc-tjmd8 [744.360664ms] Mar 10 21:48:14.589: INFO: Created: latency-svc-xzjct Mar 10 21:48:14.604: INFO: Got endpoints: latency-svc-xzjct [731.428378ms] Mar 10 21:48:14.669: INFO: Created: latency-svc-fvrnm Mar 10 21:48:14.671: INFO: Got endpoints: latency-svc-fvrnm [696.775605ms] Mar 10 21:48:14.706: INFO: Created: latency-svc-b9gln Mar 10 21:48:14.737: INFO: Got endpoints: latency-svc-b9gln [725.953221ms] Mar 10 21:48:14.830: INFO: Created: latency-svc-tzg4j Mar 10 21:48:14.834: INFO: Got endpoints: latency-svc-tzg4j [786.314604ms] Mar 10 21:48:14.872: INFO: Created: latency-svc-wr985 Mar 10 21:48:14.894: INFO: Got endpoints: latency-svc-wr985 [770.009714ms] Mar 10 21:48:14.957: INFO: Created: latency-svc-jffll Mar 10 21:48:14.959: INFO: Got endpoints: latency-svc-jffll [826.499206ms] Mar 10 21:48:14.988: INFO: Created: latency-svc-zsrvl Mar 10 21:48:14.997: INFO: Got endpoints: latency-svc-zsrvl [833.913083ms] Mar 10 21:48:15.016: INFO: Created: latency-svc-njgt6 Mar 10 21:48:15.045: INFO: Created: latency-svc-kjkpc Mar 10 21:48:15.046: INFO: Got endpoints: latency-svc-njgt6 [852.345089ms] Mar 10 21:48:15.100: INFO: Got endpoints: latency-svc-kjkpc [802.429977ms] Mar 10 21:48:15.106: INFO: Created: latency-svc-ggknw Mar 10 21:48:15.118: INFO: Got endpoints: latency-svc-ggknw [804.154822ms] Mar 10 21:48:15.143: INFO: Created: latency-svc-4lp7q Mar 10 21:48:15.154: INFO: Got endpoints: latency-svc-4lp7q [809.778891ms] Mar 10 21:48:15.173: INFO: Created: latency-svc-fs4mz Mar 10 21:48:15.185: INFO: Got endpoints: latency-svc-fs4mz [768.317468ms] Mar 10 21:48:15.238: INFO: Created: latency-svc-q7nhz Mar 10 21:48:15.240: INFO: Got endpoints: latency-svc-q7nhz [780.215341ms] Mar 10 21:48:15.273: INFO: Created: latency-svc-fgtjq Mar 10 21:48:15.287: INFO: Got endpoints: latency-svc-fgtjq [743.762868ms] Mar 10 21:48:15.324: INFO: Created: latency-svc-gn8tf Mar 10 21:48:15.329: INFO: Got endpoints: latency-svc-gn8tf [764.033093ms] Mar 10 21:48:15.381: INFO: Created: latency-svc-tl7k5 Mar 10 21:48:15.384: INFO: Got endpoints: latency-svc-tl7k5 [779.538512ms] Mar 10 21:48:15.417: INFO: Created: latency-svc-qmh7p Mar 10 21:48:15.441: INFO: Got endpoints: latency-svc-qmh7p [770.131332ms] Mar 10 21:48:15.465: INFO: Created: latency-svc-97cdg Mar 10 21:48:15.469: INFO: Got endpoints: latency-svc-97cdg [731.67649ms] Mar 10 21:48:15.531: INFO: Created: latency-svc-bjw9z Mar 10 21:48:15.563: INFO: Created: latency-svc-242hv Mar 10 21:48:15.563: INFO: Got endpoints: latency-svc-bjw9z [729.221893ms] Mar 10 21:48:15.572: INFO: Got endpoints: latency-svc-242hv [678.134565ms] Mar 10 21:48:15.615: INFO: Created: latency-svc-gzvzh Mar 10 21:48:15.669: INFO: Got endpoints: latency-svc-gzvzh [709.448259ms] Mar 10 21:48:15.670: INFO: Created: latency-svc-k2kbd Mar 10 21:48:15.681: INFO: Got endpoints: latency-svc-k2kbd [684.066229ms] Mar 10 21:48:15.714: INFO: Created: latency-svc-qpgwm Mar 10 21:48:15.723: INFO: Got endpoints: latency-svc-qpgwm [677.025853ms] Mar 10 21:48:15.749: INFO: Created: latency-svc-z8q98 Mar 10 21:48:15.813: INFO: Got endpoints: latency-svc-z8q98 [713.119873ms] Mar 10 21:48:15.815: INFO: Created: latency-svc-992v8 Mar 10 21:48:15.825: INFO: Got endpoints: 
latency-svc-992v8 [707.217112ms] Mar 10 21:48:15.873: INFO: Created: latency-svc-nb88q Mar 10 21:48:15.875: INFO: Got endpoints: latency-svc-nb88q [720.985016ms] Mar 10 21:48:15.950: INFO: Created: latency-svc-ptl96 Mar 10 21:48:15.977: INFO: Got endpoints: latency-svc-ptl96 [792.408196ms] Mar 10 21:48:16.017: INFO: Created: latency-svc-wlmtp Mar 10 21:48:16.047: INFO: Got endpoints: latency-svc-wlmtp [807.610464ms] Mar 10 21:48:16.047: INFO: Created: latency-svc-kxlgb Mar 10 21:48:16.049: INFO: Got endpoints: latency-svc-kxlgb [761.976445ms] Mar 10 21:48:16.101: INFO: Created: latency-svc-7qfzk Mar 10 21:48:16.103: INFO: Got endpoints: latency-svc-7qfzk [773.763525ms] Mar 10 21:48:16.139: INFO: Created: latency-svc-c4bbf Mar 10 21:48:16.146: INFO: Got endpoints: latency-svc-c4bbf [762.198134ms] Mar 10 21:48:16.170: INFO: Created: latency-svc-ssjxv Mar 10 21:48:16.177: INFO: Got endpoints: latency-svc-ssjxv [735.798196ms] Mar 10 21:48:16.199: INFO: Created: latency-svc-l7swk Mar 10 21:48:16.249: INFO: Got endpoints: latency-svc-l7swk [780.714103ms] Mar 10 21:48:16.253: INFO: Created: latency-svc-lbfnl Mar 10 21:48:16.262: INFO: Got endpoints: latency-svc-lbfnl [699.303809ms] Mar 10 21:48:16.287: INFO: Created: latency-svc-2gtn6 Mar 10 21:48:16.292: INFO: Got endpoints: latency-svc-2gtn6 [719.977627ms] Mar 10 21:48:16.331: INFO: Created: latency-svc-9xln5 Mar 10 21:48:16.393: INFO: Got endpoints: latency-svc-9xln5 [724.617185ms] Mar 10 21:48:16.403: INFO: Created: latency-svc-9vpnv Mar 10 21:48:16.444: INFO: Got endpoints: latency-svc-9vpnv [763.146862ms] Mar 10 21:48:16.468: INFO: Created: latency-svc-69sg4 Mar 10 21:48:16.493: INFO: Got endpoints: latency-svc-69sg4 [769.41742ms] Mar 10 21:48:16.547: INFO: Created: latency-svc-jbwjp Mar 10 21:48:16.557: INFO: Got endpoints: latency-svc-jbwjp [744.205311ms] Mar 10 21:48:16.584: INFO: Created: latency-svc-rg9k8 Mar 10 21:48:16.611: INFO: Got endpoints: latency-svc-rg9k8 [785.480536ms] Mar 10 21:48:16.675: INFO: Created: latency-svc-s9w5s Mar 10 21:48:16.677: INFO: Got endpoints: latency-svc-s9w5s [801.763238ms] Mar 10 21:48:16.707: INFO: Created: latency-svc-dznh9 Mar 10 21:48:16.709: INFO: Got endpoints: latency-svc-dznh9 [731.363959ms] Mar 10 21:48:16.745: INFO: Created: latency-svc-xn5sz Mar 10 21:48:16.750: INFO: Got endpoints: latency-svc-xn5sz [703.062684ms] Mar 10 21:48:16.825: INFO: Created: latency-svc-r8xmg Mar 10 21:48:16.829: INFO: Got endpoints: latency-svc-r8xmg [779.318472ms] Mar 10 21:48:16.829: INFO: Created: latency-svc-qksqx Mar 10 21:48:16.835: INFO: Got endpoints: latency-svc-qksqx [732.162977ms] Mar 10 21:48:16.863: INFO: Created: latency-svc-944bk Mar 10 21:48:16.866: INFO: Got endpoints: latency-svc-944bk [720.354074ms] Mar 10 21:48:16.893: INFO: Created: latency-svc-gb6ql Mar 10 21:48:16.896: INFO: Got endpoints: latency-svc-gb6ql [719.437955ms] Mar 10 21:48:16.923: INFO: Created: latency-svc-dvsds Mar 10 21:48:16.968: INFO: Got endpoints: latency-svc-dvsds [718.444969ms] Mar 10 21:48:16.978: INFO: Created: latency-svc-zn688 Mar 10 21:48:16.988: INFO: Got endpoints: latency-svc-zn688 [725.224986ms] Mar 10 21:48:17.019: INFO: Created: latency-svc-fqtqw Mar 10 21:48:17.029: INFO: Got endpoints: latency-svc-fqtqw [737.360848ms] Mar 10 21:48:17.055: INFO: Created: latency-svc-h8nxv Mar 10 21:48:17.060: INFO: Got endpoints: latency-svc-h8nxv [666.454453ms] Mar 10 21:48:17.106: INFO: Created: latency-svc-zpnbj Mar 10 21:48:17.135: INFO: Created: latency-svc-n9btq Mar 10 21:48:17.135: INFO: Got endpoints: latency-svc-zpnbj 
[691.226046ms] Mar 10 21:48:17.145: INFO: Got endpoints: latency-svc-n9btq [652.083492ms] Mar 10 21:48:17.171: INFO: Created: latency-svc-xslf2 Mar 10 21:48:17.182: INFO: Got endpoints: latency-svc-xslf2 [624.103339ms] Mar 10 21:48:17.201: INFO: Created: latency-svc-gvqdj Mar 10 21:48:17.262: INFO: Got endpoints: latency-svc-gvqdj [650.761341ms] Mar 10 21:48:17.263: INFO: Created: latency-svc-46jk8 Mar 10 21:48:17.268: INFO: Got endpoints: latency-svc-46jk8 [591.304036ms] Mar 10 21:48:17.291: INFO: Created: latency-svc-g2v7j Mar 10 21:48:17.393: INFO: Got endpoints: latency-svc-g2v7j [684.118072ms] Mar 10 21:48:17.421: INFO: Created: latency-svc-xssp9 Mar 10 21:48:17.445: INFO: Got endpoints: latency-svc-xssp9 [694.945616ms] Mar 10 21:48:17.482: INFO: Created: latency-svc-zhkzb Mar 10 21:48:17.537: INFO: Got endpoints: latency-svc-zhkzb [707.9256ms] Mar 10 21:48:17.549: INFO: Created: latency-svc-cwc7p Mar 10 21:48:17.556: INFO: Got endpoints: latency-svc-cwc7p [720.16862ms] Mar 10 21:48:17.585: INFO: Created: latency-svc-rwnrp Mar 10 21:48:17.604: INFO: Got endpoints: latency-svc-rwnrp [737.384948ms] Mar 10 21:48:17.631: INFO: Created: latency-svc-v4cnz Mar 10 21:48:17.681: INFO: Got endpoints: latency-svc-v4cnz [784.322374ms] Mar 10 21:48:17.684: INFO: Created: latency-svc-mj5s9 Mar 10 21:48:17.701: INFO: Got endpoints: latency-svc-mj5s9 [732.848405ms] Mar 10 21:48:17.730: INFO: Created: latency-svc-lnctd Mar 10 21:48:17.737: INFO: Got endpoints: latency-svc-lnctd [749.613593ms] Mar 10 21:48:17.771: INFO: Created: latency-svc-6l2dg Mar 10 21:48:17.780: INFO: Got endpoints: latency-svc-6l2dg [750.326786ms] Mar 10 21:48:17.813: INFO: Created: latency-svc-mgg7g Mar 10 21:48:17.834: INFO: Got endpoints: latency-svc-mgg7g [773.901868ms] Mar 10 21:48:17.859: INFO: Created: latency-svc-2z4nj Mar 10 21:48:17.871: INFO: Got endpoints: latency-svc-2z4nj [734.474608ms] Mar 10 21:48:17.909: INFO: Created: latency-svc-6s2w7 Mar 10 21:48:17.962: INFO: Got endpoints: latency-svc-6s2w7 [817.22383ms] Mar 10 21:48:17.997: INFO: Created: latency-svc-ftq9c Mar 10 21:48:18.003: INFO: Got endpoints: latency-svc-ftq9c [821.443123ms] Mar 10 21:48:18.027: INFO: Created: latency-svc-xf9wl Mar 10 21:48:18.029: INFO: Got endpoints: latency-svc-xf9wl [767.528713ms] Mar 10 21:48:18.059: INFO: Created: latency-svc-jw8p6 Mar 10 21:48:18.118: INFO: Got endpoints: latency-svc-jw8p6 [849.138495ms] Mar 10 21:48:18.119: INFO: Created: latency-svc-h7b97 Mar 10 21:48:18.130: INFO: Got endpoints: latency-svc-h7b97 [737.314007ms] Mar 10 21:48:18.155: INFO: Created: latency-svc-pq6hc Mar 10 21:48:18.161: INFO: Got endpoints: latency-svc-pq6hc [715.294402ms] Mar 10 21:48:18.161: INFO: Latencies: [51.870492ms 129.467473ms 192.226738ms 220.098142ms 285.709563ms 316.827031ms 353.358578ms 412.854736ms 448.690113ms 478.95265ms 503.467919ms 557.96979ms 591.304036ms 606.266221ms 624.103339ms 650.761341ms 652.083492ms 666.454453ms 676.38494ms 677.025853ms 678.134565ms 684.066229ms 684.118072ms 691.226046ms 694.945616ms 696.775605ms 699.303809ms 701.242275ms 703.062684ms 707.163332ms 707.217112ms 707.595574ms 707.9256ms 708.576664ms 709.448259ms 710.047417ms 713.119873ms 713.525146ms 715.294402ms 718.444969ms 718.86702ms 719.005548ms 719.437955ms 719.977627ms 720.16862ms 720.354074ms 720.985016ms 723.578971ms 724.239783ms 724.272929ms 724.617185ms 724.782087ms 725.224986ms 725.953221ms 726.001555ms 729.221893ms 731.056513ms 731.363959ms 731.374203ms 731.428378ms 731.67649ms 732.162977ms 732.848405ms 734.474608ms 735.798196ms 737.314007ms 
737.360848ms 737.384948ms 737.460759ms 738.01168ms 738.205891ms 738.367756ms 738.494845ms 743.650905ms 743.762868ms 744.205311ms 744.360664ms 744.398548ms 745.228154ms 745.987248ms 748.072061ms 749.613593ms 750.326786ms 752.117096ms 754.438343ms 756.617495ms 757.00905ms 760.046963ms 761.976445ms 762.198134ms 763.146862ms 763.539791ms 764.033093ms 765.160671ms 765.745393ms 766.703074ms 767.528713ms 767.896527ms 768.317468ms 768.441405ms 769.41742ms 770.009714ms 770.131332ms 772.46269ms 772.52997ms 773.763525ms 773.901868ms 774.122195ms 778.680952ms 778.706095ms 779.318472ms 779.538512ms 779.545355ms 780.215341ms 780.714103ms 782.842699ms 784.057445ms 784.322374ms 785.480536ms 786.210685ms 786.314604ms 787.171635ms 787.818084ms 790.052626ms 790.128728ms 792.408196ms 799.094789ms 801.763238ms 802.154799ms 802.429977ms 804.154822ms 807.610464ms 808.351697ms 809.358301ms 809.778891ms 812.378686ms 813.025264ms 814.742013ms 815.132885ms 815.331191ms 816.8887ms 817.22383ms 818.036332ms 821.443123ms 822.825671ms 826.499206ms 827.570501ms 829.025049ms 829.344409ms 829.373707ms 831.29098ms 831.715659ms 833.913083ms 834.119338ms 835.792413ms 836.02251ms 839.077198ms 839.690937ms 849.138495ms 850.039489ms 851.392897ms 851.786695ms 852.345089ms 852.73083ms 853.995291ms 855.228334ms 855.558693ms 856.558324ms 863.217855ms 869.234011ms 870.38708ms 872.959999ms 876.729397ms 880.104455ms 880.685542ms 881.732759ms 882.012961ms 889.298082ms 889.378057ms 891.037543ms 891.552462ms 891.631338ms 894.202072ms 894.531317ms 905.888636ms 916.297066ms 953.095966ms 960.30568ms 961.558915ms 965.701782ms 973.862737ms 983.37631ms 984.991804ms 992.775628ms 1.002405041s 1.014379036s 1.016480179s 1.017960408s 1.020032201s 1.050037809s] Mar 10 21:48:18.161: INFO: 50 %ile: 769.41742ms Mar 10 21:48:18.161: INFO: 90 %ile: 891.552462ms Mar 10 21:48:18.161: INFO: 99 %ile: 1.020032201s Mar 10 21:48:18.161: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:48:18.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3433" for this suite. 
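
The service-endpoint-latency test creates one backend pod through the svc-latency-rc replication controller, then creates 200 short-lived services selecting that pod and measures, for each, the interval between creating the service ("Created:") and observing its endpoints ("Got endpoints:"); the run is judged on the percentile summary (here 50%: ~769ms, 90%: ~892ms, 99%: ~1.02s over 200 samples). A crude single-sample analogue from the shell, with illustrative names (GNU date is assumed for nanosecond timestamps):

kubectl create deployment latency-demo --image=k8s.gcr.io/pause:3.1
kubectl wait --for=condition=Available deployment/latency-demo
start=$(date +%s%N)
kubectl expose deployment latency-demo --port=80
# poll until the endpoints object lists an address
until [ -n "$(kubectl get endpoints latency-demo \
      -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
  sleep 0.05
done
echo "endpoints latency: $(( ( $(date +%s%N) - start ) / 1000000 )) ms"
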
• [SLOW TEST:12.764 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":163,"skipped":2656,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:48:18.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 10 21:48:18.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7919' Mar 10 21:48:18.464: INFO: stderr: "" Mar 10 21:48:18.464: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 10 21:48:18.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7919' Mar 10 21:48:18.589: INFO: stderr: "" Mar 10 21:48:18.589: INFO: stdout: "update-demo-nautilus-lfvtv update-demo-nautilus-wspv6 " Mar 10 21:48:18.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lfvtv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:18.714: INFO: stderr: "" Mar 10 21:48:18.714: INFO: stdout: "" Mar 10 21:48:18.714: INFO: update-demo-nautilus-lfvtv is created but not running Mar 10 21:48:23.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7919' Mar 10 21:48:23.821: INFO: stderr: "" Mar 10 21:48:23.821: INFO: stdout: "update-demo-nautilus-lfvtv update-demo-nautilus-wspv6 " Mar 10 21:48:23.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lfvtv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:23.912: INFO: stderr: "" Mar 10 21:48:23.912: INFO: stdout: "true" Mar 10 21:48:23.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lfvtv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:23.977: INFO: stderr: "" Mar 10 21:48:23.977: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 10 21:48:23.977: INFO: validating pod update-demo-nautilus-lfvtv Mar 10 21:48:23.981: INFO: got data: { "image": "nautilus.jpg" } Mar 10 21:48:23.981: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 10 21:48:23.981: INFO: update-demo-nautilus-lfvtv is verified up and running Mar 10 21:48:23.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wspv6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:24.066: INFO: stderr: "" Mar 10 21:48:24.066: INFO: stdout: "true" Mar 10 21:48:24.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wspv6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:24.134: INFO: stderr: "" Mar 10 21:48:24.134: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 10 21:48:24.134: INFO: validating pod update-demo-nautilus-wspv6 Mar 10 21:48:24.179: INFO: got data: { "image": "nautilus.jpg" } Mar 10 21:48:24.179: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 10 21:48:24.179: INFO: update-demo-nautilus-wspv6 is verified up and running STEP: scaling down the replication controller Mar 10 21:48:24.181: INFO: scanned /root for discovery docs: Mar 10 21:48:24.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7919' Mar 10 21:48:25.650: INFO: stderr: "" Mar 10 21:48:25.651: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 10 21:48:25.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7919' Mar 10 21:48:25.745: INFO: stderr: "" Mar 10 21:48:25.745: INFO: stdout: "update-demo-nautilus-lfvtv update-demo-nautilus-wspv6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 10 21:48:30.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7919' Mar 10 21:48:30.837: INFO: stderr: "" Mar 10 21:48:30.837: INFO: stdout: "update-demo-nautilus-lfvtv update-demo-nautilus-wspv6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 10 21:48:35.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7919' Mar 10 21:48:35.928: INFO: stderr: "" Mar 10 21:48:35.928: INFO: stdout: "update-demo-nautilus-lfvtv update-demo-nautilus-wspv6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 10 21:48:40.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7919' Mar 10 21:48:41.050: INFO: stderr: "" Mar 10 21:48:41.050: INFO: stdout: "update-demo-nautilus-wspv6 " Mar 10 21:48:41.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wspv6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:41.146: INFO: stderr: "" Mar 10 21:48:41.146: INFO: stdout: "true" Mar 10 21:48:41.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wspv6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:41.234: INFO: stderr: "" Mar 10 21:48:41.234: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 10 21:48:41.234: INFO: validating pod update-demo-nautilus-wspv6 Mar 10 21:48:41.236: INFO: got data: { "image": "nautilus.jpg" } Mar 10 21:48:41.236: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 10 21:48:41.236: INFO: update-demo-nautilus-wspv6 is verified up and running STEP: scaling up the replication controller Mar 10 21:48:41.240: INFO: scanned /root for discovery docs: Mar 10 21:48:41.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7919' Mar 10 21:48:42.373: INFO: stderr: "" Mar 10 21:48:42.373: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 10 21:48:42.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7919' Mar 10 21:48:42.452: INFO: stderr: "" Mar 10 21:48:42.452: INFO: stdout: "update-demo-nautilus-kvdxv update-demo-nautilus-wspv6 " Mar 10 21:48:42.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvdxv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:42.537: INFO: stderr: "" Mar 10 21:48:42.537: INFO: stdout: "" Mar 10 21:48:42.537: INFO: update-demo-nautilus-kvdxv is created but not running Mar 10 21:48:47.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7919' Mar 10 21:48:47.640: INFO: stderr: "" Mar 10 21:48:47.640: INFO: stdout: "update-demo-nautilus-kvdxv update-demo-nautilus-wspv6 " Mar 10 21:48:47.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvdxv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:47.716: INFO: stderr: "" Mar 10 21:48:47.716: INFO: stdout: "true" Mar 10 21:48:47.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvdxv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:47.802: INFO: stderr: "" Mar 10 21:48:47.802: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 10 21:48:47.802: INFO: validating pod update-demo-nautilus-kvdxv Mar 10 21:48:47.806: INFO: got data: { "image": "nautilus.jpg" } Mar 10 21:48:47.806: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 10 21:48:47.806: INFO: update-demo-nautilus-kvdxv is verified up and running Mar 10 21:48:47.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wspv6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:47.888: INFO: stderr: "" Mar 10 21:48:47.888: INFO: stdout: "true" Mar 10 21:48:47.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wspv6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7919' Mar 10 21:48:47.960: INFO: stderr: "" Mar 10 21:48:47.960: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 10 21:48:47.960: INFO: validating pod update-demo-nautilus-wspv6 Mar 10 21:48:47.962: INFO: got data: { "image": "nautilus.jpg" } Mar 10 21:48:47.962: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 10 21:48:47.962: INFO: update-demo-nautilus-wspv6 is verified up and running STEP: using delete to clean up resources Mar 10 21:48:47.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7919' Mar 10 21:48:48.042: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 10 21:48:48.042: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 10 21:48:48.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7919' Mar 10 21:48:48.117: INFO: stderr: "No resources found in kubectl-7919 namespace.\n" Mar 10 21:48:48.117: INFO: stdout: "" Mar 10 21:48:48.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7919 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 10 21:48:48.180: INFO: stderr: "" Mar 10 21:48:48.180: INFO: stdout: "update-demo-nautilus-kvdxv\nupdate-demo-nautilus-wspv6\n" Mar 10 21:48:48.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7919' Mar 10 21:48:48.801: INFO: stderr: "No resources found in kubectl-7919 namespace.\n" Mar 10 21:48:48.801: INFO: stdout: "" Mar 10 21:48:48.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7919 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 10 21:48:48.884: INFO: stderr: "" Mar 10 21:48:48.884: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:48:48.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7919" for this suite. 
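
Everything in the Update Demo test above goes through the kubectl binary itself: create the RC, scale it down to one replica and back up to two with a five-minute timeout, and poll with go-templates until the observed pod set matches. The core commands, lifted directly from this run (the long template prints "true" only when the pod's update-demo container reports a running state):

kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus \
  --replicas=1 --timeout=5m --namespace=kubectl-7919
# list the pods currently selected by the RC's label
kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7919 \
  -o template --template='{{range.items}}{{.metadata.name}} {{end}}'
# check that a surviving pod's update-demo container is running
kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wspv6 --namespace=kubectl-7919 \
  -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
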
• [SLOW TEST:30.720 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":164,"skipped":2657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:48:48.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 10 21:48:49.018: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 10 21:48:49.028: INFO: Waiting for terminating namespaces to be deleted... Mar 10 21:48:49.030: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 10 21:48:49.034: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 10 21:48:49.034: INFO: Container kindnet-cni ready: true, restart count 0 Mar 10 21:48:49.034: INFO: update-demo-nautilus-kvdxv from kubectl-7919 started at 2020-03-10 21:48:41 +0000 UTC (1 container status recorded) Mar 10 21:48:49.034: INFO: Container update-demo ready: false, restart count 0 Mar 10 21:48:49.034: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 10 21:48:49.034: INFO: Container kube-proxy ready: true, restart count 0 Mar 10 21:48:49.034: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 10 21:48:49.039: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 10 21:48:49.039: INFO: Container kindnet-cni ready: true, restart count 0 Mar 10 21:48:49.039: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 10 21:48:49.039: INFO: Container kube-proxy ready: true, restart count 0 Mar 10 21:48:49.039: INFO: update-demo-nautilus-wspv6 from kubectl-7919 started at 2020-03-10 21:48:18 +0000 UTC (1 container status recorded) Mar 10 21:48:49.039: INFO: Container update-demo ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 10 21:48:49.112: INFO: Pod kindnet-gxwrl requesting resource cpu=100m on Node jerma-worker Mar 10 21:48:49.112: INFO: Pod kindnet-x9bds requesting resource cpu=100m on Node jerma-worker2 Mar 10
21:48:49.112: INFO: Pod kube-proxy-dvgp7 requesting resource cpu=0m on Node jerma-worker Mar 10 21:48:49.112: INFO: Pod kube-proxy-xqsww requesting resource cpu=0m on Node jerma-worker2 Mar 10 21:48:49.112: INFO: Pod update-demo-nautilus-kvdxv requesting resource cpu=0m on Node jerma-worker Mar 10 21:48:49.112: INFO: Pod update-demo-nautilus-wspv6 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 10 21:48:49.112: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 10 21:48:49.154: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-0b46502a-b00f-4f83-bca6-985f499ba3b1.15fb0face2fb9d63], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4784/filler-pod-0b46502a-b00f-4f83-bca6-985f499ba3b1 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-0b46502a-b00f-4f83-bca6-985f499ba3b1.15fb0fad1c4e2c01], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-0b46502a-b00f-4f83-bca6-985f499ba3b1.15fb0fad2a197b21], Reason = [Created], Message = [Created container filler-pod-0b46502a-b00f-4f83-bca6-985f499ba3b1] STEP: Considering event: Type = [Normal], Name = [filler-pod-0b46502a-b00f-4f83-bca6-985f499ba3b1.15fb0fad372022bb], Reason = [Started], Message = [Started container filler-pod-0b46502a-b00f-4f83-bca6-985f499ba3b1] STEP: Considering event: Type = [Normal], Name = [filler-pod-21a20afb-61a9-4a5e-a396-0eb74542243a.15fb0face2f5391f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4784/filler-pod-21a20afb-61a9-4a5e-a396-0eb74542243a to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-21a20afb-61a9-4a5e-a396-0eb74542243a.15fb0fad16f7249c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-21a20afb-61a9-4a5e-a396-0eb74542243a.15fb0fad245aa099], Reason = [Created], Message = [Created container filler-pod-21a20afb-61a9-4a5e-a396-0eb74542243a] STEP: Considering event: Type = [Normal], Name = [filler-pod-21a20afb-61a9-4a5e-a396-0eb74542243a.15fb0fad2ff77fa3], Reason = [Started], Message = [Started container filler-pod-21a20afb-61a9-4a5e-a396-0eb74542243a] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fb0fad5b2ae4bd], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:48:52.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4784" for this suite. 
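The predicate test above works on request arithmetic, not real load: it sums the CPU already requested on each node (kube-proxy and the nautilus pods request 0m), fills the remainder with one pause-image filler pod per node, then submits a pod whose request cannot fit anywhere, expecting exactly the FailedScheduling event quoted above. A hedged sketch of such a filler pod using the Kubernetes Go API types follows; the node label key, the 11130m figure, and the pause image are taken from the log, while the pod/container names are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Only requests drive scheduler accounting; limits are set equal here,
	// which also makes the pod Guaranteed QoS.
	cpu := resource.MustParse("11130m")
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod", Namespace: "sched-pred-4784"},
		Spec: corev1.PodSpec{
			// The test first labels each node with node=<name> ("verifying
			// the node has the label node jerma-worker" above), then pins
			// one filler pod per node via a selector on that label.
			NodeSelector: map[string]string{"node": "jerma-worker"},
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: cpu},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}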
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":165,"skipped":2687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:48:52.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 10 21:48:52.319: INFO: Waiting up to 5m0s for pod "var-expansion-f1196412-32cf-4217-b714-8682e1a25e33" in namespace "var-expansion-566" to be "success or failure" Mar 10 21:48:52.323: INFO: Pod "var-expansion-f1196412-32cf-4217-b714-8682e1a25e33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.002712ms Mar 10 21:48:54.325: INFO: Pod "var-expansion-f1196412-32cf-4217-b714-8682e1a25e33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006185467s STEP: Saw pod success Mar 10 21:48:54.325: INFO: Pod "var-expansion-f1196412-32cf-4217-b714-8682e1a25e33" satisfied condition "success or failure" Mar 10 21:48:54.327: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-f1196412-32cf-4217-b714-8682e1a25e33 container dapi-container: STEP: delete the pod Mar 10 21:48:54.350: INFO: Waiting for pod var-expansion-f1196412-32cf-4217-b714-8682e1a25e33 to disappear Mar 10 21:48:54.354: INFO: Pod var-expansion-f1196412-32cf-4217-b714-8682e1a25e33 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:48:54.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-566" for this suite. 
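The var-expansion pod above succeeds because the kubelet substitutes `$(VAR)` references in a container's `command` and `args` from that container's own `env` before the process starts; the framework then greps the container log for the expanded value. A minimal sketch of such a pod spec, with a hypothetical message value (`$$(VAR)` would escape the substitution):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				// $(MESSAGE) is expanded by the kubelet from the env list
				// below, not by the shell.
				Args: []string{"echo $(MESSAGE)"},
				Env:  []corev1.EnvVar{{Name: "MESSAGE", Value: "hello world"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(b))
}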
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2737,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:48:54.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362 STEP: creating the pod Mar 10 21:48:54.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-698' Mar 10 21:48:54.644: INFO: stderr: "" Mar 10 21:48:54.644: INFO: stdout: "pod/pause created\n" Mar 10 21:48:54.644: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 10 21:48:54.644: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-698" to be "running and ready" Mar 10 21:48:54.679: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 34.647049ms Mar 10 21:48:56.683: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.038722265s Mar 10 21:48:56.683: INFO: Pod "pause" satisfied condition "running and ready" Mar 10 21:48:56.683: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 10 21:48:56.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-698' Mar 10 21:48:56.795: INFO: stderr: "" Mar 10 21:48:56.795: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 10 21:48:56.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-698' Mar 10 21:48:56.883: INFO: stderr: "" Mar 10 21:48:56.883: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 10 21:48:56.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-698' Mar 10 21:48:56.960: INFO: stderr: "" Mar 10 21:48:56.960: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 10 21:48:56.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-698' Mar 10 21:48:57.033: INFO: stderr: "" Mar 10 21:48:57.033: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369 STEP: using delete to clean up resources Mar 10 21:48:57.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-698' Mar 10 21:48:57.133: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 10 21:48:57.133: INFO: stdout: "pod \"pause\" force deleted\n" Mar 10 21:48:57.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-698' Mar 10 21:48:57.214: INFO: stderr: "No resources found in kubectl-698 namespace.\n" Mar 10 21:48:57.214: INFO: stdout: "" Mar 10 21:48:57.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-698 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 10 21:48:57.319: INFO: stderr: "" Mar 10 21:48:57.319: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:48:57.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-698" for this suite. 
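Both `kubectl label pods pause testing-label=testing-label-value` and the trailing-dash removal form reduce to a patch against the pod's metadata; in a strategic merge patch, setting a label key to null deletes it. A sketch of the equivalent client-go calls, assuming a recent client-go where the verbs take a context (kubeconfig path and namespace are from the log):

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods := cs.CoreV1().Pods("kubectl-698")
	ctx := context.Background()
	// kubectl label pods pause testing-label=testing-label-value
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(ctx, "pause", types.StrategicMergePatchType, add,
		metav1.PatchOptions{}); err != nil {
		log.Fatal(err)
	}
	// kubectl label pods pause testing-label-  (null removes the key)
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(ctx, "pause", types.StrategicMergePatchType, del,
		metav1.PatchOptions{}); err != nil {
		log.Fatal(err)
	}
}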
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":167,"skipped":2752,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:48:57.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5355 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 10 21:48:57.560: INFO: Found 0 stateful pods, waiting for 3 Mar 10 21:49:07.564: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:49:07.564: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:49:07.565: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 10 21:49:07.591: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 10 21:49:17.659: INFO: Updating stateful set ss2 Mar 10 21:49:17.709: INFO: Waiting for Pod statefulset-5355/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 10 21:49:27.838: INFO: Found 2 stateful pods, waiting for 3 Mar 10 21:49:37.843: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:49:37.843: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 10 21:49:37.843: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 10 21:49:37.867: INFO: Updating stateful set ss2 Mar 10 21:49:37.895: INFO: Waiting for Pod statefulset-5355/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 10 21:49:47.936: INFO: Updating stateful set ss2 Mar 10 21:49:47.949: INFO: Waiting for StatefulSet statefulset-5355/ss2 to complete update Mar 10 21:49:47.949: INFO: Waiting for Pod statefulset-5355/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 10 21:49:57.958: INFO: Deleting all statefulset in ns 
statefulset-5355 Mar 10 21:49:57.961: INFO: Scaling statefulset ss2 to 0 Mar 10 21:50:17.983: INFO: Waiting for statefulset status.replicas updated to 0 Mar 10 21:50:17.987: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:50:18.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5355" for this suite. • [SLOW TEST:80.662 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":168,"skipped":2765,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:50:18.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-cde2cef8-9532-4ed7-b885-289b3a71e682 STEP: Creating a pod to test consume secrets Mar 10 21:50:18.183: INFO: Waiting up to 5m0s for pod "pod-secrets-71d2262b-a423-4abd-a8d8-61e9a2003ab4" in namespace "secrets-8699" to be "success or failure" Mar 10 21:50:18.207: INFO: Pod "pod-secrets-71d2262b-a423-4abd-a8d8-61e9a2003ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.028898ms Mar 10 21:50:20.211: INFO: Pod "pod-secrets-71d2262b-a423-4abd-a8d8-61e9a2003ab4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027555467s STEP: Saw pod success Mar 10 21:50:20.211: INFO: Pod "pod-secrets-71d2262b-a423-4abd-a8d8-61e9a2003ab4" satisfied condition "success or failure" Mar 10 21:50:20.213: INFO: Trying to get logs from node jerma-worker pod pod-secrets-71d2262b-a423-4abd-a8d8-61e9a2003ab4 container secret-volume-test: STEP: delete the pod Mar 10 21:50:20.262: INFO: Waiting for pod pod-secrets-71d2262b-a423-4abd-a8d8-61e9a2003ab4 to disappear Mar 10 21:50:20.270: INFO: Pod pod-secrets-71d2262b-a423-4abd-a8d8-61e9a2003ab4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:50:20.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8699" for this suite. 
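The secrets test above is the standard volume-consumption pattern: the secret's keys appear as files under the mount path, the test container prints one of them, and the framework checks the pod log (hence "Trying to get logs from node ... container secret-volume-test"). A minimal sketch of the volume wiring; the secret name, key, and command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "secret-volume-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "secret-volume",
				MountPath: "/etc/secret-volume",
				ReadOnly:  true,
			}},
		}},
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}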
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2798,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:50:20.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8223 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-8223 Mar 10 21:50:20.367: INFO: Found 0 stateful pods, waiting for 1 Mar 10 21:50:30.372: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 10 21:50:30.424: INFO: Deleting all statefulset in ns statefulset-8223 Mar 10 21:50:30.435: INFO: Scaling statefulset ss to 0 Mar 10 21:50:40.492: INFO: Waiting for statefulset status.replicas updated to 0 Mar 10 21:50:40.495: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:50:40.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8223" for this suite. 
• [SLOW TEST:20.256 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":170,"skipped":2800,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:50:40.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:50:40.623: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-fdb3a0d0-59c2-497a-8f7c-0407758948af" in namespace "security-context-test-383" to be "success or failure" Mar 10 21:50:40.632: INFO: Pod "busybox-privileged-false-fdb3a0d0-59c2-497a-8f7c-0407758948af": Phase="Pending", Reason="", readiness=false. Elapsed: 9.291004ms Mar 10 21:50:42.635: INFO: Pod "busybox-privileged-false-fdb3a0d0-59c2-497a-8f7c-0407758948af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011822024s Mar 10 21:50:42.635: INFO: Pod "busybox-privileged-false-fdb3a0d0-59c2-497a-8f7c-0407758948af" satisfied condition "success or failure" Mar 10 21:50:42.645: INFO: Got logs for pod "busybox-privileged-false-fdb3a0d0-59c2-497a-8f7c-0407758948af": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:50:42.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-383" for this suite. 
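The "RTNETLINK answers: Operation not permitted" line above is the expected outcome, not a failure: with privileged=false the container's default capability set excludes NET_ADMIN, so network configuration via `ip` is refused, and that refusal is exactly what the test asserts in the pod log. A sketch of the container-level securityContext (the command is illustrative of what produces that error):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	privileged := false
	c := corev1.Container{
		Name:  "busybox-privileged-false",
		Image: "busybox",
		// Without privileged mode this fails with the RTNETLINK error seen
		// in the log above; with Privileged: true it would succeed.
		Command:         []string{"ip", "link", "add", "dummy0", "type", "dummy"},
		SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}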
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2834,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:50:42.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 10 21:50:42.721: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 10 21:50:52.799: INFO: >>> kubeConfig: /root/.kube/config Mar 10 21:50:55.738: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:51:05.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3572" for this suite. 
• [SLOW TEST:23.180 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":172,"skipped":2847,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:51:05.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:51:06.754: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:51:09.802: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:51:10.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4249" for this suite. STEP: Destroying namespace "webhook-4249-markers" for this suite. 
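The "Listing all of the created validation webhooks" and "Deleting the collection" steps above are plain list and deleteCollection verbs against mutatingwebhookconfigurations, scoped by a label (the STEP text says "validation webhooks", but the objects here are MutatingWebhookConfigurations; the test then proves deletion worked by creating a configmap that comes through unmutated). A hedged client-go sketch; the label selector is hypothetical and recent context-taking signatures are assumed:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	hooks := cs.AdmissionregistrationV1().MutatingWebhookConfigurations()
	sel := "e2e-list-test-webhooks=some-run-uid" // hypothetical label selector
	list, err := hooks.List(ctx, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("found", len(list.Items), "mutating webhook configurations")
	// Remove them all in one call; afterwards newly created objects should
	// no longer be mutated.
	if err := hooks.DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: sel}); err != nil {
		log.Fatal(err)
	}
}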
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":173,"skipped":2869,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:51:10.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-087dc5ed-74c6-4d31-976e-4bc9b9dd48da STEP: Creating a pod to test consume secrets Mar 10 21:51:10.410: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c83c82bc-e19e-4ec9-9494-376b80a4eb4a" in namespace "projected-8272" to be "success or failure" Mar 10 21:51:10.440: INFO: Pod "pod-projected-secrets-c83c82bc-e19e-4ec9-9494-376b80a4eb4a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.817614ms Mar 10 21:51:12.445: INFO: Pod "pod-projected-secrets-c83c82bc-e19e-4ec9-9494-376b80a4eb4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.03495375s STEP: Saw pod success Mar 10 21:51:12.445: INFO: Pod "pod-projected-secrets-c83c82bc-e19e-4ec9-9494-376b80a4eb4a" satisfied condition "success or failure" Mar 10 21:51:12.448: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-c83c82bc-e19e-4ec9-9494-376b80a4eb4a container secret-volume-test: STEP: delete the pod Mar 10 21:51:12.546: INFO: Waiting for pod pod-projected-secrets-c83c82bc-e19e-4ec9-9494-376b80a4eb4a to disappear Mar 10 21:51:12.553: INFO: Pod pod-projected-secrets-c83c82bc-e19e-4ec9-9494-376b80a4eb4a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:51:12.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8272" for this suite. 
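"Consumable in multiple volumes in a pod" above means the same projected secret is mounted at two different paths; projected volumes are the general mechanism that lets secret, configMap, and downward API sources share a single mount. A sketch of the volume wiring (secret name, mount paths, and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One projected source, reused by two volumes mounted at different paths.
	src := corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{
						Name: "projected-secret-test",
					},
				},
			}},
		},
	}
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{
			{Name: "projected-secret-volume-1", VolumeSource: src},
			{Name: "projected-secret-volume-2", VolumeSource: src},
		},
		Containers: []corev1.Container{{
			Name:    "secret-volume-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "ls /etc/projected-1 /etc/projected-2"},
			VolumeMounts: []corev1.VolumeMount{
				{Name: "projected-secret-volume-1", MountPath: "/etc/projected-1", ReadOnly: true},
				{Name: "projected-secret-volume-2", MountPath: "/etc/projected-2", ReadOnly: true},
			},
		}},
	}
	b, _ := json.MarshalIndent(spec.Volumes, "", "  ")
	fmt.Println(string(b))
}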
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2871,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:51:12.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-e743f6e9-cb23-40dd-946e-ba8a56f38dc2 STEP: Creating a pod to test consume configMaps Mar 10 21:51:12.701: INFO: Waiting up to 5m0s for pod "pod-configmaps-531cc629-7632-4082-a5ff-d9493d1599bc" in namespace "configmap-3497" to be "success or failure" Mar 10 21:51:12.714: INFO: Pod "pod-configmaps-531cc629-7632-4082-a5ff-d9493d1599bc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.553356ms Mar 10 21:51:14.718: INFO: Pod "pod-configmaps-531cc629-7632-4082-a5ff-d9493d1599bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01616748s STEP: Saw pod success Mar 10 21:51:14.718: INFO: Pod "pod-configmaps-531cc629-7632-4082-a5ff-d9493d1599bc" satisfied condition "success or failure" Mar 10 21:51:14.720: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-531cc629-7632-4082-a5ff-d9493d1599bc container configmap-volume-test: STEP: delete the pod Mar 10 21:51:14.753: INFO: Waiting for pod pod-configmaps-531cc629-7632-4082-a5ff-d9493d1599bc to disappear Mar 10 21:51:14.760: INFO: Pod pod-configmaps-531cc629-7632-4082-a5ff-d9493d1599bc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:51:14.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3497" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2878,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:51:14.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 10 21:51:14.851: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Mar 10 21:51:15.479: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 10 21:51:17.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473875, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473875, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473875, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473875, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 10 21:51:20.304: INFO: Waited 706.777856ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:51:20.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2822" for this suite. 
• [SLOW TEST:6.135 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":176,"skipped":2923,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:51:20.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5135.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5135.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 10 21:51:25.005: INFO: DNS probes using dns-test-ecf8495c-9a45-434f-8ec0-06b3d6a5e187 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5135.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5135.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 10 21:51:29.158: INFO: File wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local from pod dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 10 21:51:29.161: INFO: File jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local from pod dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 10 21:51:29.161: INFO: Lookups using dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 failed for: [wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local] Mar 10 21:51:34.165: INFO: File wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local from pod dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 10 21:51:34.167: INFO: File jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local from pod dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 contains '' instead of 'bar.example.com.' Mar 10 21:51:34.167: INFO: Lookups using dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 failed for: [wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local] Mar 10 21:51:39.165: INFO: File wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local from pod dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 10 21:51:39.169: INFO: File jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local from pod dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 10 21:51:39.169: INFO: Lookups using dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 failed for: [wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local] Mar 10 21:51:44.165: INFO: File wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local from pod dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 10 21:51:44.168: INFO: File jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local from pod dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 10 21:51:44.168: INFO: Lookups using dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 failed for: [wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local] Mar 10 21:51:49.176: INFO: File wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local from pod dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 10 21:51:49.180: INFO: File jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local from pod dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 10 21:51:49.180: INFO: Lookups using dns-5135/dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 failed for: [wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local] Mar 10 21:51:54.169: INFO: DNS probes using dns-test-c13cfb58-3696-4657-bbc9-956d990afc37 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5135.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5135.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5135.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5135.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 10 21:51:58.355: INFO: DNS probes using dns-test-c53cecd0-7a43-444b-a163-f35e632f1b90 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:51:58.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5135" for this suite. • [SLOW TEST:37.607 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":177,"skipped":2986,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:51:58.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 10 21:51:58.575: INFO: Waiting up to 5m0s for pod "pod-076335b5-c6f6-4ebf-aae8-f927b3bcd7c2" in namespace "emptydir-5712" to be "success or failure" Mar 10 21:51:58.611: INFO: Pod "pod-076335b5-c6f6-4ebf-aae8-f927b3bcd7c2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.848754ms Mar 10 21:52:00.614: INFO: Pod "pod-076335b5-c6f6-4ebf-aae8-f927b3bcd7c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.039499634s STEP: Saw pod success Mar 10 21:52:00.615: INFO: Pod "pod-076335b5-c6f6-4ebf-aae8-f927b3bcd7c2" satisfied condition "success or failure" Mar 10 21:52:00.616: INFO: Trying to get logs from node jerma-worker pod pod-076335b5-c6f6-4ebf-aae8-f927b3bcd7c2 container test-container: STEP: delete the pod Mar 10 21:52:00.654: INFO: Waiting for pod pod-076335b5-c6f6-4ebf-aae8-f927b3bcd7c2 to disappear Mar 10 21:52:00.670: INFO: Pod pod-076335b5-c6f6-4ebf-aae8-f927b3bcd7c2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:52:00.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5712" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":3016,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:52:00.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:52:01.224: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 10 21:52:03.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473921, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473921, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473921, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473921, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:52:06.267: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:52:06.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5998" for this suite. STEP: Destroying namespace "webhook-5998-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.720 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":179,"skipped":3020,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:52:06.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:52:06.462: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-fa14e744-760f-4536-ad07-a05e1efd4ea1" in namespace "security-context-test-6299" to be "success or failure" Mar 10 21:52:06.485: INFO: Pod "busybox-readonly-false-fa14e744-760f-4536-ad07-a05e1efd4ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 23.156349ms Mar 10 21:52:08.488: INFO: Pod "busybox-readonly-false-fa14e744-760f-4536-ad07-a05e1efd4ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02652956s Mar 10 21:52:08.488: INFO: Pod "busybox-readonly-false-fa14e744-760f-4536-ad07-a05e1efd4ea1" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:52:08.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6299" for this suite. 
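The readOnlyRootFilesystem=false case above is the permissive half of the pair: with the flag false (or unset), writes to the container's root filesystem succeed, so the pod runs to completion and the framework sees phase Succeeded, as logged. A sketch of the container-level flag (the command is illustrative; with the flag true, the same write would fail with a read-only filesystem error):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	readOnly := false
	c := corev1.Container{
		Name:    "busybox-readonly-false",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo ok > /tmp/probe && cat /tmp/probe"},
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}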
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3021,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:52:08.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 10 21:52:11.124: INFO: Successfully updated pod "pod-update-36f15b5e-e8e8-4f46-b19b-55a02ed6d6d8" STEP: verifying the updated pod is in kubernetes Mar 10 21:52:11.152: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:52:11.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2120" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3025,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:52:11.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:52:11.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be0f4849-b25b-4051-bcad-7c7ea99aae2d" in namespace "downward-api-5832" to be "success or failure" Mar 10 21:52:11.272: INFO: Pod "downwardapi-volume-be0f4849-b25b-4051-bcad-7c7ea99aae2d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.820928ms Mar 10 21:52:13.275: INFO: Pod "downwardapi-volume-be0f4849-b25b-4051-bcad-7c7ea99aae2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.039355814s STEP: Saw pod success Mar 10 21:52:13.275: INFO: Pod "downwardapi-volume-be0f4849-b25b-4051-bcad-7c7ea99aae2d" satisfied condition "success or failure" Mar 10 21:52:13.277: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-be0f4849-b25b-4051-bcad-7c7ea99aae2d container client-container: STEP: delete the pod Mar 10 21:52:13.297: INFO: Waiting for pod downwardapi-volume-be0f4849-b25b-4051-bcad-7c7ea99aae2d to disappear Mar 10 21:52:13.319: INFO: Pod downwardapi-volume-be0f4849-b25b-4051-bcad-7c7ea99aae2d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:52:13.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5832" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3057,"failed":0} ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:52:13.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 10 21:52:13.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-817' Mar 10 21:52:13.724: INFO: stderr: "" Mar 10 21:52:13.724: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 10 21:52:13.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-817' Mar 10 21:52:13.820: INFO: stderr: "" Mar 10 21:52:13.820: INFO: stdout: "update-demo-nautilus-mqbpq update-demo-nautilus-wtwnj " Mar 10 21:52:13.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqbpq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-817' Mar 10 21:52:14.260: INFO: stderr: "" Mar 10 21:52:14.260: INFO: stdout: "" Mar 10 21:52:14.260: INFO: update-demo-nautilus-mqbpq is created but not running Mar 10 21:52:19.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-817' Mar 10 21:52:19.370: INFO: stderr: "" Mar 10 21:52:19.371: INFO: stdout: "update-demo-nautilus-mqbpq update-demo-nautilus-wtwnj " Mar 10 21:52:19.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqbpq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-817' Mar 10 21:52:19.466: INFO: stderr: "" Mar 10 21:52:19.466: INFO: stdout: "true" Mar 10 21:52:19.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqbpq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-817' Mar 10 21:52:19.548: INFO: stderr: "" Mar 10 21:52:19.548: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 10 21:52:19.548: INFO: validating pod update-demo-nautilus-mqbpq Mar 10 21:52:19.551: INFO: got data: { "image": "nautilus.jpg" } Mar 10 21:52:19.551: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 10 21:52:19.551: INFO: update-demo-nautilus-mqbpq is verified up and running Mar 10 21:52:19.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wtwnj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-817' Mar 10 21:52:19.663: INFO: stderr: "" Mar 10 21:52:19.663: INFO: stdout: "true" Mar 10 21:52:19.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wtwnj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-817' Mar 10 21:52:19.740: INFO: stderr: "" Mar 10 21:52:19.741: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 10 21:52:19.741: INFO: validating pod update-demo-nautilus-wtwnj Mar 10 21:52:19.744: INFO: got data: { "image": "nautilus.jpg" } Mar 10 21:52:19.744: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 10 21:52:19.744: INFO: update-demo-nautilus-wtwnj is verified up and running STEP: rolling-update to new replication controller Mar 10 21:52:19.746: INFO: scanned /root for discovery docs: Mar 10 21:52:19.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-817' Mar 10 21:52:42.326: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 10 21:52:42.326: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 10 21:52:42.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-817' Mar 10 21:52:42.411: INFO: stderr: "" Mar 10 21:52:42.411: INFO: stdout: "update-demo-kitten-5thrv update-demo-kitten-v94tp " Mar 10 21:52:42.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5thrv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-817' Mar 10 21:52:42.503: INFO: stderr: "" Mar 10 21:52:42.503: INFO: stdout: "true" Mar 10 21:52:42.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5thrv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-817' Mar 10 21:52:42.577: INFO: stderr: "" Mar 10 21:52:42.577: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 10 21:52:42.577: INFO: validating pod update-demo-kitten-5thrv Mar 10 21:52:42.580: INFO: got data: { "image": "kitten.jpg" } Mar 10 21:52:42.580: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 10 21:52:42.580: INFO: update-demo-kitten-5thrv is verified up and running Mar 10 21:52:42.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-v94tp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-817' Mar 10 21:52:42.647: INFO: stderr: "" Mar 10 21:52:42.647: INFO: stdout: "true" Mar 10 21:52:42.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-v94tp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-817' Mar 10 21:52:42.710: INFO: stderr: "" Mar 10 21:52:42.710: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 10 21:52:42.710: INFO: validating pod update-demo-kitten-v94tp Mar 10 21:52:42.713: INFO: got data: { "image": "kitten.jpg" } Mar 10 21:52:42.713: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 10 21:52:42.713: INFO: update-demo-kitten-v94tp is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:52:42.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-817" for this suite. • [SLOW TEST:29.393 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":183,"skipped":3057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:52:42.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-5685ddec-3ab8-4ab0-8b4c-4c52bd3e26d5 STEP: Creating a pod to test consume secrets Mar 10 21:52:42.804: INFO: Waiting up to 5m0s for pod "pod-secrets-53562cbf-eed9-4381-a569-cbf00533c130" in namespace "secrets-865" to be "success or failure" Mar 10 21:52:42.814: INFO: Pod "pod-secrets-53562cbf-eed9-4381-a569-cbf00533c130": Phase="Pending", Reason="", readiness=false. Elapsed: 10.049776ms Mar 10 21:52:44.818: INFO: Pod "pod-secrets-53562cbf-eed9-4381-a569-cbf00533c130": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.014134172s STEP: Saw pod success Mar 10 21:52:44.818: INFO: Pod "pod-secrets-53562cbf-eed9-4381-a569-cbf00533c130" satisfied condition "success or failure" Mar 10 21:52:44.821: INFO: Trying to get logs from node jerma-worker pod pod-secrets-53562cbf-eed9-4381-a569-cbf00533c130 container secret-volume-test: STEP: delete the pod Mar 10 21:52:44.846: INFO: Waiting for pod pod-secrets-53562cbf-eed9-4381-a569-cbf00533c130 to disappear Mar 10 21:52:44.868: INFO: Pod pod-secrets-53562cbf-eed9-4381-a569-cbf00533c130 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:52:44.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-865" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3080,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:52:44.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:52:45.407: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 10 21:52:47.417: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719473965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:52:50.492: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be 
denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:53:00.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2140" for this suite. STEP: Destroying namespace "webhook-2140-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.861 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":185,"skipped":3093,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:53:00.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 21:53:00.809: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:53:02.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3460" for this suite.
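
For reference, the websocket case above streams the pod's core/v1 log subresource; a rough hand-run analogue (the pod name is illustrative, since the real test pod carries a generated suffix) reads the same endpoint without the websocket upgrade:

# Ordinary client-side log fetch...
kubectl --kubeconfig=/root/.kube/config logs pod-logs-websocket -n pods-3460
# ...or the raw log subresource the websocket client negotiates over:
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/namespaces/pods-3460/pods/pod-logs-websocket/log"
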
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3101,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:53:02.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 10 21:53:03.535: INFO: Pod name wrapped-volume-race-07c3d13f-e079-49d6-a93d-07c8520e3954: Found 0 pods out of 5 Mar 10 21:53:08.541: INFO: Pod name wrapped-volume-race-07c3d13f-e079-49d6-a93d-07c8520e3954: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-07c3d13f-e079-49d6-a93d-07c8520e3954 in namespace emptydir-wrapper-2737, will wait for the garbage collector to delete the pods Mar 10 21:53:18.998: INFO: Deleting ReplicationController wrapped-volume-race-07c3d13f-e079-49d6-a93d-07c8520e3954 took: 6.834084ms Mar 10 21:53:19.298: INFO: Terminating ReplicationController wrapped-volume-race-07c3d13f-e079-49d6-a93d-07c8520e3954 pods took: 300.27454ms STEP: Creating RC which spawns configmap-volume pods Mar 10 21:53:27.134: INFO: Pod name wrapped-volume-race-044b0d9d-824a-4024-bb6a-ecf1f826e117: Found 0 pods out of 5 Mar 10 21:53:32.140: INFO: Pod name wrapped-volume-race-044b0d9d-824a-4024-bb6a-ecf1f826e117: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-044b0d9d-824a-4024-bb6a-ecf1f826e117 in namespace emptydir-wrapper-2737, will wait for the garbage collector to delete the pods Mar 10 21:53:44.291: INFO: Deleting ReplicationController wrapped-volume-race-044b0d9d-824a-4024-bb6a-ecf1f826e117 took: 19.102609ms Mar 10 21:53:44.591: INFO: Terminating ReplicationController wrapped-volume-race-044b0d9d-824a-4024-bb6a-ecf1f826e117 pods took: 300.212689ms STEP: Creating RC which spawns configmap-volume pods Mar 10 21:53:56.138: INFO: Pod name wrapped-volume-race-7d74f218-b993-4680-a1cb-2744533c5dd0: Found 0 pods out of 5 Mar 10 21:54:01.146: INFO: Pod name wrapped-volume-race-7d74f218-b993-4680-a1cb-2744533c5dd0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7d74f218-b993-4680-a1cb-2744533c5dd0 in namespace emptydir-wrapper-2737, will wait for the garbage collector to delete the pods Mar 10 21:54:11.258: INFO: Deleting ReplicationController wrapped-volume-race-7d74f218-b993-4680-a1cb-2744533c5dd0 took: 7.371806ms Mar 10 21:54:11.658: INFO: Terminating ReplicationController wrapped-volume-race-7d74f218-b993-4680-a1cb-2744533c5dd0 pods took: 400.273946ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Mar 10 21:54:18.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2737" for this suite. • [SLOW TEST:76.043 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":187,"skipped":3104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:54:18.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 10 21:54:19.003: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 10 21:54:19.019: INFO: Waiting for terminating namespaces to be deleted... Mar 10 21:54:19.021: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 10 21:54:19.035: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 10 21:54:19.035: INFO: Container kube-proxy ready: true, restart count 0 Mar 10 21:54:19.035: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 10 21:54:19.035: INFO: Container kindnet-cni ready: true, restart count 0 Mar 10 21:54:19.035: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 10 21:54:19.131: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 10 21:54:19.131: INFO: Container kindnet-cni ready: true, restart count 0 Mar 10 21:54:19.131: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 10 21:54:19.131: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-24a9f39a-2cf3-4bef-9789-18e22f853d74 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-24a9f39a-2cf3-4bef-9789-18e22f853d74 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-24a9f39a-2cf3-4bef-9789-18e22f853d74 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:59:25.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9058" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:306.443 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":188,"skipped":3154,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:59:25.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
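
The spec that follows exercises a postStart exec hook; for orientation, this is the shape of the object under test, as a minimal sketch (pod name, image, and the file the hook touches are illustrative):

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-exec-demo
spec:
  containers:
  - name: hooked
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it starts; the container
          # is not reported Running until this handler completes.
          command: ["/bin/sh", "-c", "echo poststart > /tmp/hook-ran"]
EOF
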
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 10 21:59:29.471: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 10 21:59:29.479: INFO: Pod pod-with-poststart-exec-hook still exists Mar 10 21:59:31.480: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 10 21:59:31.482: INFO: Pod pod-with-poststart-exec-hook still exists Mar 10 21:59:33.480: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 10 21:59:33.483: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:59:33.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3500" for this suite. • [SLOW TEST:8.125 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3175,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:59:33.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7910 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 10 21:59:33.619: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 10 21:59:51.742: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.52:8080/dial?request=hostname&protocol=udp&host=10.244.2.51&port=8081&tries=1'] Namespace:pod-network-test-7910 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 21:59:51.742: INFO: >>> kubeConfig: /root/.kube/config I0310 21:59:51.780637 6 log.go:172] (0xc002db64d0) (0xc0011c1180) Create stream I0310 21:59:51.780671 6 log.go:172] (0xc002db64d0) (0xc0011c1180) Stream added, broadcasting: 1 I0310 21:59:51.783516 6 log.go:172] (0xc002db64d0) Reply frame received for 1 I0310 
21:59:51.783568 6 log.go:172] (0xc002db64d0) (0xc0011c1220) Create stream I0310 21:59:51.783591 6 log.go:172] (0xc002db64d0) (0xc0011c1220) Stream added, broadcasting: 3 I0310 21:59:51.784655 6 log.go:172] (0xc002db64d0) Reply frame received for 3 I0310 21:59:51.784687 6 log.go:172] (0xc002db64d0) (0xc001bdbcc0) Create stream I0310 21:59:51.784700 6 log.go:172] (0xc002db64d0) (0xc001bdbcc0) Stream added, broadcasting: 5 I0310 21:59:51.785576 6 log.go:172] (0xc002db64d0) Reply frame received for 5 I0310 21:59:51.847522 6 log.go:172] (0xc002db64d0) Data frame received for 3 I0310 21:59:51.847550 6 log.go:172] (0xc0011c1220) (3) Data frame handling I0310 21:59:51.847561 6 log.go:172] (0xc0011c1220) (3) Data frame sent I0310 21:59:51.848061 6 log.go:172] (0xc002db64d0) Data frame received for 5 I0310 21:59:51.848087 6 log.go:172] (0xc001bdbcc0) (5) Data frame handling I0310 21:59:51.848547 6 log.go:172] (0xc002db64d0) Data frame received for 3 I0310 21:59:51.848568 6 log.go:172] (0xc0011c1220) (3) Data frame handling I0310 21:59:51.850308 6 log.go:172] (0xc002db64d0) Data frame received for 1 I0310 21:59:51.850331 6 log.go:172] (0xc0011c1180) (1) Data frame handling I0310 21:59:51.850347 6 log.go:172] (0xc0011c1180) (1) Data frame sent I0310 21:59:51.850367 6 log.go:172] (0xc002db64d0) (0xc0011c1180) Stream removed, broadcasting: 1 I0310 21:59:51.850414 6 log.go:172] (0xc002db64d0) Go away received I0310 21:59:51.850448 6 log.go:172] (0xc002db64d0) (0xc0011c1180) Stream removed, broadcasting: 1 I0310 21:59:51.850458 6 log.go:172] (0xc002db64d0) (0xc0011c1220) Stream removed, broadcasting: 3 I0310 21:59:51.850467 6 log.go:172] (0xc002db64d0) (0xc001bdbcc0) Stream removed, broadcasting: 5 Mar 10 21:59:51.850: INFO: Waiting for responses: map[] Mar 10 21:59:51.853: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.52:8080/dial?request=hostname&protocol=udp&host=10.244.1.33&port=8081&tries=1'] Namespace:pod-network-test-7910 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 21:59:51.853: INFO: >>> kubeConfig: /root/.kube/config I0310 21:59:51.883413 6 log.go:172] (0xc002db6bb0) (0xc0011c14a0) Create stream I0310 21:59:51.883437 6 log.go:172] (0xc002db6bb0) (0xc0011c14a0) Stream added, broadcasting: 1 I0310 21:59:51.885911 6 log.go:172] (0xc002db6bb0) Reply frame received for 1 I0310 21:59:51.885949 6 log.go:172] (0xc002db6bb0) (0xc0011c1540) Create stream I0310 21:59:51.885960 6 log.go:172] (0xc002db6bb0) (0xc0011c1540) Stream added, broadcasting: 3 I0310 21:59:51.886815 6 log.go:172] (0xc002db6bb0) Reply frame received for 3 I0310 21:59:51.886844 6 log.go:172] (0xc002db6bb0) (0xc001bdbf40) Create stream I0310 21:59:51.886855 6 log.go:172] (0xc002db6bb0) (0xc001bdbf40) Stream added, broadcasting: 5 I0310 21:59:51.887713 6 log.go:172] (0xc002db6bb0) Reply frame received for 5 I0310 21:59:51.957547 6 log.go:172] (0xc002db6bb0) Data frame received for 3 I0310 21:59:51.957590 6 log.go:172] (0xc0011c1540) (3) Data frame handling I0310 21:59:51.957623 6 log.go:172] (0xc0011c1540) (3) Data frame sent I0310 21:59:51.958089 6 log.go:172] (0xc002db6bb0) Data frame received for 3 I0310 21:59:51.958169 6 log.go:172] (0xc0011c1540) (3) Data frame handling I0310 21:59:51.958202 6 log.go:172] (0xc002db6bb0) Data frame received for 5 I0310 21:59:51.958215 6 log.go:172] (0xc001bdbf40) (5) Data frame handling I0310 21:59:51.960094 6 log.go:172] (0xc002db6bb0) Data frame received for 1 I0310 21:59:51.960117 6 
log.go:172] (0xc0011c14a0) (1) Data frame handling I0310 21:59:51.960131 6 log.go:172] (0xc0011c14a0) (1) Data frame sent I0310 21:59:51.960147 6 log.go:172] (0xc002db6bb0) (0xc0011c14a0) Stream removed, broadcasting: 1 I0310 21:59:51.960164 6 log.go:172] (0xc002db6bb0) Go away received I0310 21:59:51.960359 6 log.go:172] (0xc002db6bb0) (0xc0011c14a0) Stream removed, broadcasting: 1 I0310 21:59:51.960384 6 log.go:172] (0xc002db6bb0) (0xc0011c1540) Stream removed, broadcasting: 3 I0310 21:59:51.960396 6 log.go:172] (0xc002db6bb0) (0xc001bdbf40) Stream removed, broadcasting: 5 Mar 10 21:59:51.960: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:59:51.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7910" for this suite. • [SLOW TEST:18.506 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3177,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:59:51.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:59:52.079: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9ef96fb-5561-4ca1-aed3-9fa6adbb8fc4" in namespace "downward-api-4389" to be "success or failure" Mar 10 21:59:52.085: INFO: Pod "downwardapi-volume-e9ef96fb-5561-4ca1-aed3-9fa6adbb8fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188817ms Mar 10 21:59:54.089: INFO: Pod "downwardapi-volume-e9ef96fb-5561-4ca1-aed3-9fa6adbb8fc4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010207387s STEP: Saw pod success Mar 10 21:59:54.089: INFO: Pod "downwardapi-volume-e9ef96fb-5561-4ca1-aed3-9fa6adbb8fc4" satisfied condition "success or failure" Mar 10 21:59:54.092: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e9ef96fb-5561-4ca1-aed3-9fa6adbb8fc4 container client-container: STEP: delete the pod Mar 10 21:59:54.122: INFO: Waiting for pod downwardapi-volume-e9ef96fb-5561-4ca1-aed3-9fa6adbb8fc4 to disappear Mar 10 21:59:54.126: INFO: Pod downwardapi-volume-e9ef96fb-5561-4ca1-aed3-9fa6adbb8fc4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:59:54.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4389" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3193,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:59:54.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 21:59:54.594: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 21:59:57.635: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 21:59:57.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8157" for this suite. STEP: Destroying namespace "webhook-8157-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":192,"skipped":3200,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 21:59:57.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 21:59:57.949: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8804ef41-34c6-4aff-b4e2-ff3689ca1892" in namespace "projected-2702" to be "success or failure" Mar 10 21:59:57.953: INFO: Pod "downwardapi-volume-8804ef41-34c6-4aff-b4e2-ff3689ca1892": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27581ms Mar 10 21:59:59.957: INFO: Pod "downwardapi-volume-8804ef41-34c6-4aff-b4e2-ff3689ca1892": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008579852s Mar 10 22:00:01.962: INFO: Pod "downwardapi-volume-8804ef41-34c6-4aff-b4e2-ff3689ca1892": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013005811s STEP: Saw pod success Mar 10 22:00:01.962: INFO: Pod "downwardapi-volume-8804ef41-34c6-4aff-b4e2-ff3689ca1892" satisfied condition "success or failure" Mar 10 22:00:01.972: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8804ef41-34c6-4aff-b4e2-ff3689ca1892 container client-container: STEP: delete the pod Mar 10 22:00:02.018: INFO: Waiting for pod downwardapi-volume-8804ef41-34c6-4aff-b4e2-ff3689ca1892 to disappear Mar 10 22:00:02.032: INFO: Pod downwardapi-volume-8804ef41-34c6-4aff-b4e2-ff3689ca1892 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:02.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2702" for this suite. 
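
In API terms, the spec above mounts a projected downwardAPI volume whose item publishes limits.cpu via resourceFieldRef; because the container sets no CPU limit, the published value falls back to the node's allocatable CPU. A minimal sketch of that wiring (names and mount path are illustrative):

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # No resources.limits.cpu is set, so the projected file should carry
    # the node-allocatable default rather than a container-level limit.
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
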
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3209,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:02.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 10 22:00:02.088: INFO: Waiting up to 5m0s for pod "pod-3d8b3cf9-753d-4e5f-b642-78f9682969e2" in namespace "emptydir-1468" to be "success or failure" Mar 10 22:00:02.091: INFO: Pod "pod-3d8b3cf9-753d-4e5f-b642-78f9682969e2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.219179ms Mar 10 22:00:04.095: INFO: Pod "pod-3d8b3cf9-753d-4e5f-b642-78f9682969e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006519607s STEP: Saw pod success Mar 10 22:00:04.095: INFO: Pod "pod-3d8b3cf9-753d-4e5f-b642-78f9682969e2" satisfied condition "success or failure" Mar 10 22:00:04.097: INFO: Trying to get logs from node jerma-worker pod pod-3d8b3cf9-753d-4e5f-b642-78f9682969e2 container test-container: STEP: delete the pod Mar 10 22:00:04.135: INFO: Waiting for pod pod-3d8b3cf9-753d-4e5f-b642-78f9682969e2 to disappear Mar 10 22:00:04.145: INFO: Pod pod-3d8b3cf9-753d-4e5f-b642-78f9682969e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:04.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1468" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3224,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:04.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 10 22:00:04.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8559' Mar 10 22:00:05.920: INFO: stderr: "" Mar 10 22:00:05.920: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 10 22:00:10.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8559 -o json' Mar 10 22:00:11.074: INFO: stderr: "" Mar 10 22:00:11.074: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-10T22:00:05Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8559\",\n \"resourceVersion\": \"686998\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8559/pods/e2e-test-httpd-pod\",\n \"uid\": \"4dc7d8e3-3525-4202-9cb9-35789cb79a5d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-6rk22\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-6rk22\",\n \"secret\": {\n 
\"defaultMode\": 420,\n \"secretName\": \"default-token-6rk22\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-10T22:00:05Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-10T22:00:07Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-10T22:00:07Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-10T22:00:05Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a17b828b0a3869875669fe99fbc3ef1d6ee95e97f15f2bdff12acb1f14a89515\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-10T22:00:07Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.36\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.36\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-10T22:00:05Z\"\n }\n}\n" STEP: replace the image in the pod Mar 10 22:00:11.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8559' Mar 10 22:00:11.371: INFO: stderr: "" Mar 10 22:00:11.371: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882 Mar 10 22:00:11.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8559' Mar 10 22:00:16.016: INFO: stderr: "" Mar 10 22:00:16.016: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:16.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8559" for this suite. 
• [SLOW TEST:11.874 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":195,"skipped":3235,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:16.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:00:16.090: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 10 22:00:17.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5206 create -f -' Mar 10 22:00:20.014: INFO: stderr: "" Mar 10 22:00:20.014: INFO: stdout: "e2e-test-crd-publish-openapi-7941-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 10 22:00:20.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5206 delete e2e-test-crd-publish-openapi-7941-crds test-cr' Mar 10 22:00:20.149: INFO: stderr: "" Mar 10 22:00:20.150: INFO: stdout: "e2e-test-crd-publish-openapi-7941-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 10 22:00:20.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5206 apply -f -' Mar 10 22:00:20.444: INFO: stderr: "" Mar 10 22:00:20.444: INFO: stdout: "e2e-test-crd-publish-openapi-7941-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 10 22:00:20.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5206 delete e2e-test-crd-publish-openapi-7941-crds test-cr' Mar 10 22:00:20.556: INFO: stderr: "" Mar 10 22:00:20.556: INFO: stdout: "e2e-test-crd-publish-openapi-7941-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 10 22:00:20.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7941-crds' Mar 10 22:00:20.782: INFO: stderr: "" Mar 10 22:00:20.782: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7941-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:22.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5206" for this suite. • [SLOW TEST:6.609 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":196,"skipped":3284,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:22.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 10 22:00:22.676: INFO: namespace kubectl-8870 Mar 10 22:00:22.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8870' Mar 10 22:00:22.985: INFO: stderr: "" Mar 10 22:00:22.985: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 10 22:00:23.989: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 22:00:23.989: INFO: Found 0 / 1 Mar 10 22:00:24.989: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 22:00:24.989: INFO: Found 1 / 1 Mar 10 22:00:24.989: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 10 22:00:24.992: INFO: Selector matched 1 pods for map[app:agnhost] Mar 10 22:00:24.992: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 10 22:00:24.992: INFO: wait on agnhost-master startup in kubectl-8870 Mar 10 22:00:24.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-9q6l7 agnhost-master --namespace=kubectl-8870' Mar 10 22:00:25.123: INFO: stderr: "" Mar 10 22:00:25.123: INFO: stdout: "Paused\n" STEP: exposing RC Mar 10 22:00:25.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8870' Mar 10 22:00:25.251: INFO: stderr: "" Mar 10 22:00:25.251: INFO: stdout: "service/rm2 exposed\n" Mar 10 22:00:25.254: INFO: Service rm2 in namespace kubectl-8870 found. 
STEP: exposing service Mar 10 22:00:27.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8870' Mar 10 22:00:27.464: INFO: stderr: "" Mar 10 22:00:27.464: INFO: stdout: "service/rm3 exposed\n" Mar 10 22:00:27.476: INFO: Service rm3 in namespace kubectl-8870 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:29.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8870" for this suite. • [SLOW TEST:6.872 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":197,"skipped":3301,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:29.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 10 22:00:29.646: INFO: Waiting up to 5m0s for pod "downward-api-8fa0d610-8a13-47e7-b1f1-1b2cbf6f3186" in namespace "downward-api-614" to be "success or failure" Mar 10 22:00:29.649: INFO: Pod "downward-api-8fa0d610-8a13-47e7-b1f1-1b2cbf6f3186": Phase="Pending", Reason="", readiness=false. Elapsed: 3.429699ms Mar 10 22:00:31.654: INFO: Pod "downward-api-8fa0d610-8a13-47e7-b1f1-1b2cbf6f3186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00786187s STEP: Saw pod success Mar 10 22:00:31.654: INFO: Pod "downward-api-8fa0d610-8a13-47e7-b1f1-1b2cbf6f3186" satisfied condition "success or failure" Mar 10 22:00:31.657: INFO: Trying to get logs from node jerma-worker pod downward-api-8fa0d610-8a13-47e7-b1f1-1b2cbf6f3186 container dapi-container: STEP: delete the pod Mar 10 22:00:31.688: INFO: Waiting for pod downward-api-8fa0d610-8a13-47e7-b1f1-1b2cbf6f3186 to disappear Mar 10 22:00:31.691: INFO: Pod downward-api-8fa0d610-8a13-47e7-b1f1-1b2cbf6f3186 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:31.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-614" for this suite. 
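Note on the downward API case above: the test pod declares no resources.limits, so the kubelet substitutes the node's allocatable CPU and memory when resolving the env vars. A minimal sketch of a pod built the same way (names here are illustrative, not the suite's generated ones; resourceFieldRef is the real API field):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    # no resources.limits declared, so these fall back to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF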
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3309,"failed":0} ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:31.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 10 22:00:31.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2427' Mar 10 22:00:32.035: INFO: stderr: "" Mar 10 22:00:32.035: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 10 22:00:32.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2427' Mar 10 22:00:32.155: INFO: stderr: "" Mar 10 22:00:32.155: INFO: stdout: "update-demo-nautilus-d7mx7 update-demo-nautilus-mdv7w " Mar 10 22:00:32.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d7mx7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2427' Mar 10 22:00:32.230: INFO: stderr: "" Mar 10 22:00:32.230: INFO: stdout: "" Mar 10 22:00:32.230: INFO: update-demo-nautilus-d7mx7 is created but not running Mar 10 22:00:37.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2427' Mar 10 22:00:37.344: INFO: stderr: "" Mar 10 22:00:37.344: INFO: stdout: "update-demo-nautilus-d7mx7 update-demo-nautilus-mdv7w " Mar 10 22:00:37.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d7mx7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2427' Mar 10 22:00:37.438: INFO: stderr: "" Mar 10 22:00:37.438: INFO: stdout: "true" Mar 10 22:00:37.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d7mx7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2427' Mar 10 22:00:37.526: INFO: stderr: "" Mar 10 22:00:37.526: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 10 22:00:37.526: INFO: validating pod update-demo-nautilus-d7mx7 Mar 10 22:00:37.531: INFO: got data: { "image": "nautilus.jpg" } Mar 10 22:00:37.531: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 10 22:00:37.532: INFO: update-demo-nautilus-d7mx7 is verified up and running Mar 10 22:00:37.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mdv7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2427' Mar 10 22:00:37.613: INFO: stderr: "" Mar 10 22:00:37.613: INFO: stdout: "true" Mar 10 22:00:37.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mdv7w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2427' Mar 10 22:00:37.694: INFO: stderr: "" Mar 10 22:00:37.694: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 10 22:00:37.694: INFO: validating pod update-demo-nautilus-mdv7w Mar 10 22:00:37.697: INFO: got data: { "image": "nautilus.jpg" } Mar 10 22:00:37.697: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 10 22:00:37.697: INFO: update-demo-nautilus-mdv7w is verified up and running STEP: using delete to clean up resources Mar 10 22:00:37.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2427' Mar 10 22:00:37.772: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 10 22:00:37.772: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 10 22:00:37.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2427' Mar 10 22:00:37.841: INFO: stderr: "No resources found in kubectl-2427 namespace.\n" Mar 10 22:00:37.841: INFO: stdout: "" Mar 10 22:00:37.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2427 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 10 22:00:37.927: INFO: stderr: "" Mar 10 22:00:37.927: INFO: stdout: "update-demo-nautilus-d7mx7\nupdate-demo-nautilus-mdv7w\n" Mar 10 22:00:38.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2427' Mar 10 22:00:38.519: INFO: stderr: "No resources found in kubectl-2427 namespace.\n" Mar 10 22:00:38.519: INFO: stdout: "" Mar 10 22:00:38.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2427 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 10 22:00:38.595: INFO: stderr: "" Mar 10 22:00:38.595: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:38.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2427" for this suite. • [SLOW TEST:6.905 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":199,"skipped":3309,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:38.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7033.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7033.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7033.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7033.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7033.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7033.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 10 22:00:42.728: INFO: DNS probes using dns-7033/dns-test-9a8f976c-9f1a-4559-9f1e-a5428eca72a9 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:42.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7033" for this suite. •{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":200,"skipped":3328,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:42.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 22:00:42.886: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fdd40683-79c5-466e-9253-f6d2f4ecbbe4" in namespace "projected-9322" to be "success or failure" Mar 10 22:00:42.926: INFO: Pod "downwardapi-volume-fdd40683-79c5-466e-9253-f6d2f4ecbbe4": Phase="Pending", Reason="", readiness=false. Elapsed: 40.144804ms Mar 10 22:00:44.930: INFO: Pod "downwardapi-volume-fdd40683-79c5-466e-9253-f6d2f4ecbbe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044374263s Mar 10 22:00:46.934: INFO: Pod "downwardapi-volume-fdd40683-79c5-466e-9253-f6d2f4ecbbe4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048331438s STEP: Saw pod success Mar 10 22:00:46.934: INFO: Pod "downwardapi-volume-fdd40683-79c5-466e-9253-f6d2f4ecbbe4" satisfied condition "success or failure" Mar 10 22:00:46.937: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-fdd40683-79c5-466e-9253-f6d2f4ecbbe4 container client-container: STEP: delete the pod Mar 10 22:00:46.988: INFO: Waiting for pod downwardapi-volume-fdd40683-79c5-466e-9253-f6d2f4ecbbe4 to disappear Mar 10 22:00:46.998: INFO: Pod downwardapi-volume-fdd40683-79c5-466e-9253-f6d2f4ecbbe4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:46.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9322" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3342,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:47.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9450 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-9450 I0310 22:00:47.142841 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9450, replica count: 2 I0310 22:00:50.193252 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 10 22:00:50.193: INFO: Creating new exec pod Mar 10 22:00:53.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9450 execpoddnhtd -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 10 22:00:53.422: INFO: stderr: "I0310 22:00:53.350259 3338 log.go:172] (0xc000a07970) (0xc000b9e960) Create stream\nI0310 22:00:53.350313 3338 log.go:172] (0xc000a07970) (0xc000b9e960) Stream added, broadcasting: 1\nI0310 22:00:53.353873 3338 log.go:172] (0xc000a07970) Reply frame received for 1\nI0310 22:00:53.353912 3338 log.go:172] (0xc000a07970) (0xc0005f8640) Create stream\nI0310 22:00:53.353924 3338 log.go:172] (0xc000a07970) (0xc0005f8640) Stream added, broadcasting: 3\nI0310 22:00:53.354991 3338 log.go:172] (0xc000a07970) Reply frame received for 3\nI0310 22:00:53.355042 3338 log.go:172] (0xc000a07970) (0xc00034f400) Create stream\nI0310 22:00:53.355068 3338 log.go:172] (0xc000a07970) (0xc00034f400) Stream added, broadcasting: 5\nI0310 22:00:53.355880 3338 log.go:172] 
(0xc000a07970) Reply frame received for 5\nI0310 22:00:53.416110 3338 log.go:172] (0xc000a07970) Data frame received for 5\nI0310 22:00:53.416135 3338 log.go:172] (0xc00034f400) (5) Data frame handling\nI0310 22:00:53.416152 3338 log.go:172] (0xc00034f400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0310 22:00:53.417155 3338 log.go:172] (0xc000a07970) Data frame received for 3\nI0310 22:00:53.417187 3338 log.go:172] (0xc0005f8640) (3) Data frame handling\nI0310 22:00:53.417210 3338 log.go:172] (0xc000a07970) Data frame received for 5\nI0310 22:00:53.417220 3338 log.go:172] (0xc00034f400) (5) Data frame handling\nI0310 22:00:53.417227 3338 log.go:172] (0xc00034f400) (5) Data frame sent\nI0310 22:00:53.417235 3338 log.go:172] (0xc000a07970) Data frame received for 5\nI0310 22:00:53.417240 3338 log.go:172] (0xc00034f400) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0310 22:00:53.418747 3338 log.go:172] (0xc000a07970) Data frame received for 1\nI0310 22:00:53.418774 3338 log.go:172] (0xc000b9e960) (1) Data frame handling\nI0310 22:00:53.418784 3338 log.go:172] (0xc000b9e960) (1) Data frame sent\nI0310 22:00:53.418982 3338 log.go:172] (0xc000a07970) (0xc000b9e960) Stream removed, broadcasting: 1\nI0310 22:00:53.419024 3338 log.go:172] (0xc000a07970) Go away received\nI0310 22:00:53.419373 3338 log.go:172] (0xc000a07970) (0xc000b9e960) Stream removed, broadcasting: 1\nI0310 22:00:53.419387 3338 log.go:172] (0xc000a07970) (0xc0005f8640) Stream removed, broadcasting: 3\nI0310 22:00:53.419394 3338 log.go:172] (0xc000a07970) (0xc00034f400) Stream removed, broadcasting: 5\n" Mar 10 22:00:53.422: INFO: stdout: "" Mar 10 22:00:53.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9450 execpoddnhtd -- /bin/sh -x -c nc -zv -t -w 2 10.101.100.119 80' Mar 10 22:00:53.625: INFO: stderr: "I0310 22:00:53.559764 3358 log.go:172] (0xc0008fe580) (0xc000739540) Create stream\nI0310 22:00:53.559809 3358 log.go:172] (0xc0008fe580) (0xc000739540) Stream added, broadcasting: 1\nI0310 22:00:53.561570 3358 log.go:172] (0xc0008fe580) Reply frame received for 1\nI0310 22:00:53.561606 3358 log.go:172] (0xc0008fe580) (0xc0008f0000) Create stream\nI0310 22:00:53.561614 3358 log.go:172] (0xc0008fe580) (0xc0008f0000) Stream added, broadcasting: 3\nI0310 22:00:53.562427 3358 log.go:172] (0xc0008fe580) Reply frame received for 3\nI0310 22:00:53.562447 3358 log.go:172] (0xc0008fe580) (0xc0008f00a0) Create stream\nI0310 22:00:53.562453 3358 log.go:172] (0xc0008fe580) (0xc0008f00a0) Stream added, broadcasting: 5\nI0310 22:00:53.563050 3358 log.go:172] (0xc0008fe580) Reply frame received for 5\nI0310 22:00:53.620065 3358 log.go:172] (0xc0008fe580) Data frame received for 3\nI0310 22:00:53.620095 3358 log.go:172] (0xc0008f0000) (3) Data frame handling\nI0310 22:00:53.620114 3358 log.go:172] (0xc0008fe580) Data frame received for 5\nI0310 22:00:53.620119 3358 log.go:172] (0xc0008f00a0) (5) Data frame handling\nI0310 22:00:53.620126 3358 log.go:172] (0xc0008f00a0) (5) Data frame sent\nI0310 22:00:53.620132 3358 log.go:172] (0xc0008fe580) Data frame received for 5\nI0310 22:00:53.620136 3358 log.go:172] (0xc0008f00a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.100.119 80\nConnection to 10.101.100.119 80 port [tcp/http] succeeded!\nI0310 22:00:53.621198 3358 log.go:172] (0xc0008fe580) Data frame received for 1\nI0310 22:00:53.621215 3358 log.go:172] (0xc000739540) (1) Data frame handling\nI0310 22:00:53.621222 3358 
log.go:172] (0xc000739540) (1) Data frame sent\nI0310 22:00:53.621235 3358 log.go:172] (0xc0008fe580) (0xc000739540) Stream removed, broadcasting: 1\nI0310 22:00:53.621254 3358 log.go:172] (0xc0008fe580) Go away received\nI0310 22:00:53.621567 3358 log.go:172] (0xc0008fe580) (0xc000739540) Stream removed, broadcasting: 1\nI0310 22:00:53.621591 3358 log.go:172] (0xc0008fe580) (0xc0008f0000) Stream removed, broadcasting: 3\nI0310 22:00:53.621600 3358 log.go:172] (0xc0008fe580) (0xc0008f00a0) Stream removed, broadcasting: 5\n" Mar 10 22:00:53.625: INFO: stdout: "" Mar 10 22:00:53.625: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:53.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9450" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:6.662 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":202,"skipped":3353,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:53.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 10 22:00:54.454: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 22:00:57.504: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:00:57.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5032-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:00:58.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5070" for this suite. STEP: Destroying namespace "webhook-5070-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.222 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":203,"skipped":3359,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:00:58.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:00:59.013: INFO: Creating ReplicaSet my-hostname-basic-350a28af-15be-4dab-a461-ea2f4e678607 Mar 10 22:00:59.047: INFO: Pod name my-hostname-basic-350a28af-15be-4dab-a461-ea2f4e678607: Found 0 pods out of 1 Mar 10 22:01:04.053: INFO: Pod name my-hostname-basic-350a28af-15be-4dab-a461-ea2f4e678607: Found 1 pods out of 1 Mar 10 22:01:04.053: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-350a28af-15be-4dab-a461-ea2f4e678607" is running Mar 10 22:01:04.055: INFO: Pod "my-hostname-basic-350a28af-15be-4dab-a461-ea2f4e678607-j6rsw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-10 22:00:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-10 22:01:00 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-10 22:01:00 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-10 22:00:59 +0000 UTC Reason: Message:}]) Mar 10 22:01:04.055: INFO: Trying to dial the pod Mar 10 22:01:09.065: INFO: Controller my-hostname-basic-350a28af-15be-4dab-a461-ea2f4e678607: Got expected result from replica 1 [my-hostname-basic-350a28af-15be-4dab-a461-ea2f4e678607-j6rsw]: "my-hostname-basic-350a28af-15be-4dab-a461-ea2f4e678607-j6rsw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:01:09.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "replicaset-6370" for this suite. • [SLOW TEST:10.182 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":204,"skipped":3375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:01:09.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:01:09.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3383" for this suite. 
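Note on the discovery-document walk above: the same checks can be reproduced against a live cluster with kubectl's raw API access (kubectl get --raw is a real flag; jq is an assumption here, not part of the suite):

# is the apiextensions.k8s.io group listed in the /apis document?
kubectl get --raw /apis | jq -r '.groups[].name' | grep -x 'apiextensions.k8s.io'
# does the group/version document list the customresourcedefinitions resource?
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq -r '.resources[].name' | grep -x 'customresourcedefinitions'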
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":205,"skipped":3422,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:01:09.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-52t2 STEP: Creating a pod to test atomic-volume-subpath Mar 10 22:01:09.270: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-52t2" in namespace "subpath-7174" to be "success or failure" Mar 10 22:01:09.293: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.342285ms Mar 10 22:01:11.316: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Running", Reason="", readiness=true. Elapsed: 2.046409113s Mar 10 22:01:13.319: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Running", Reason="", readiness=true. Elapsed: 4.049865308s Mar 10 22:01:15.323: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Running", Reason="", readiness=true. Elapsed: 6.05371586s Mar 10 22:01:17.327: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Running", Reason="", readiness=true. Elapsed: 8.05725899s Mar 10 22:01:19.331: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Running", Reason="", readiness=true. Elapsed: 10.061101018s Mar 10 22:01:21.335: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Running", Reason="", readiness=true. Elapsed: 12.065215828s Mar 10 22:01:23.338: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Running", Reason="", readiness=true. Elapsed: 14.068769017s Mar 10 22:01:25.344: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Running", Reason="", readiness=true. Elapsed: 16.074075019s Mar 10 22:01:27.347: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Running", Reason="", readiness=true. Elapsed: 18.077666013s Mar 10 22:01:29.351: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Running", Reason="", readiness=true. Elapsed: 20.081594771s Mar 10 22:01:31.355: INFO: Pod "pod-subpath-test-configmap-52t2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.08590869s STEP: Saw pod success Mar 10 22:01:31.356: INFO: Pod "pod-subpath-test-configmap-52t2" satisfied condition "success or failure" Mar 10 22:01:31.358: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-52t2 container test-container-subpath-configmap-52t2: STEP: delete the pod Mar 10 22:01:31.391: INFO: Waiting for pod pod-subpath-test-configmap-52t2 to disappear Mar 10 22:01:31.395: INFO: Pod pod-subpath-test-configmap-52t2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-52t2 Mar 10 22:01:31.395: INFO: Deleting pod "pod-subpath-test-configmap-52t2" in namespace "subpath-7174" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:01:31.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7174" for this suite. • [SLOW TEST:22.250 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":206,"skipped":3438,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:01:31.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 22:01:31.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b69d980-5336-4a8b-8d5d-019bd8ccda2a" in namespace "projected-1517" to be "success or failure" Mar 10 22:01:31.479: INFO: Pod "downwardapi-volume-3b69d980-5336-4a8b-8d5d-019bd8ccda2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.214215ms Mar 10 22:01:33.482: INFO: Pod "downwardapi-volume-3b69d980-5336-4a8b-8d5d-019bd8ccda2a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006174583s STEP: Saw pod success Mar 10 22:01:33.482: INFO: Pod "downwardapi-volume-3b69d980-5336-4a8b-8d5d-019bd8ccda2a" satisfied condition "success or failure" Mar 10 22:01:33.483: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3b69d980-5336-4a8b-8d5d-019bd8ccda2a container client-container: STEP: delete the pod Mar 10 22:01:33.519: INFO: Waiting for pod downwardapi-volume-3b69d980-5336-4a8b-8d5d-019bd8ccda2a to disappear Mar 10 22:01:33.526: INFO: Pod downwardapi-volume-3b69d980-5336-4a8b-8d5d-019bd8ccda2a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:01:33.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1517" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3442,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:01:33.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-bedd8e9a-0f08-4c30-b85b-b2df9df2a88c STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:01:37.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5055" for this suite. 
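Note on the binary-data case above: a ConfigMap carries base64-encoded bytes in binaryData alongside plain-text data keys, and both project into a volume as files. A minimal sketch under assumed names (the test's actual ConfigMap name was generated):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo
data:
  text: hello
binaryData:
  blob: 3q2+7w==    # base64 for the raw bytes 0xDE 0xAD 0xBE 0xEF
EOF

A pod mounting this ConfigMap as a volume then sees two files, text and blob, the latter holding the decoded bytes.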
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3461,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:01:37.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 10 22:01:37.733: INFO: Waiting up to 5m0s for pod "pod-69d48e73-cd3c-4d02-a577-e733d8901ed9" in namespace "emptydir-1788" to be "success or failure" Mar 10 22:01:37.752: INFO: Pod "pod-69d48e73-cd3c-4d02-a577-e733d8901ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.963963ms Mar 10 22:01:39.772: INFO: Pod "pod-69d48e73-cd3c-4d02-a577-e733d8901ed9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.038991543s STEP: Saw pod success Mar 10 22:01:39.772: INFO: Pod "pod-69d48e73-cd3c-4d02-a577-e733d8901ed9" satisfied condition "success or failure" Mar 10 22:01:39.774: INFO: Trying to get logs from node jerma-worker2 pod pod-69d48e73-cd3c-4d02-a577-e733d8901ed9 container test-container: STEP: delete the pod Mar 10 22:01:39.806: INFO: Waiting for pod pod-69d48e73-cd3c-4d02-a577-e733d8901ed9 to disappear Mar 10 22:01:39.809: INFO: Pod pod-69d48e73-cd3c-4d02-a577-e733d8901ed9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:01:39.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1788" for this suite. 
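Note on the (root,0644,tmpfs) variant above: the tuple means the container runs as root, the written file is expected to carry mode 0644, and the emptyDir is memory-backed. A sketch of the pattern (names illustrative; medium: Memory is the real tmpfs switch):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # write a file, show its mode, and confirm /test is tmpfs-backed
    command: ["sh", "-c", "echo content > /test/file && ls -l /test/file && mount | grep ' /test '"]
    volumeMounts:
    - name: scratch
      mountPath: /test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF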
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3475,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:01:39.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2340 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 10 22:01:39.935: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 10 22:02:00.181: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.62:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2340 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 22:02:00.181: INFO: >>> kubeConfig: /root/.kube/config I0310 22:02:00.217914 6 log.go:172] (0xc002ae3810) (0xc001206820) Create stream I0310 22:02:00.217949 6 log.go:172] (0xc002ae3810) (0xc001206820) Stream added, broadcasting: 1 I0310 22:02:00.219978 6 log.go:172] (0xc002ae3810) Reply frame received for 1 I0310 22:02:00.220023 6 log.go:172] (0xc002ae3810) (0xc001206960) Create stream I0310 22:02:00.220037 6 log.go:172] (0xc002ae3810) (0xc001206960) Stream added, broadcasting: 3 I0310 22:02:00.221075 6 log.go:172] (0xc002ae3810) Reply frame received for 3 I0310 22:02:00.221113 6 log.go:172] (0xc002ae3810) (0xc00181c140) Create stream I0310 22:02:00.221128 6 log.go:172] (0xc002ae3810) (0xc00181c140) Stream added, broadcasting: 5 I0310 22:02:00.222045 6 log.go:172] (0xc002ae3810) Reply frame received for 5 I0310 22:02:00.289893 6 log.go:172] (0xc002ae3810) Data frame received for 5 I0310 22:02:00.289929 6 log.go:172] (0xc00181c140) (5) Data frame handling I0310 22:02:00.289953 6 log.go:172] (0xc002ae3810) Data frame received for 3 I0310 22:02:00.289966 6 log.go:172] (0xc001206960) (3) Data frame handling I0310 22:02:00.289980 6 log.go:172] (0xc001206960) (3) Data frame sent I0310 22:02:00.289992 6 log.go:172] (0xc002ae3810) Data frame received for 3 I0310 22:02:00.290004 6 log.go:172] (0xc001206960) (3) Data frame handling I0310 22:02:00.291480 6 log.go:172] (0xc002ae3810) Data frame received for 1 I0310 22:02:00.291506 6 log.go:172] (0xc001206820) (1) Data frame handling I0310 22:02:00.291522 6 log.go:172] (0xc001206820) (1) Data frame sent I0310 22:02:00.291538 6 log.go:172] (0xc002ae3810) (0xc001206820) Stream removed, broadcasting: 1 I0310 22:02:00.291555 6 log.go:172] (0xc002ae3810) Go away received I0310 22:02:00.291674 6 log.go:172] (0xc002ae3810) (0xc001206820) Stream removed, broadcasting: 1 I0310 22:02:00.291691 6 log.go:172] (0xc002ae3810) (0xc001206960) Stream removed, broadcasting: 3 I0310 
22:02:00.291703 6 log.go:172] (0xc002ae3810) (0xc00181c140) Stream removed, broadcasting: 5 Mar 10 22:02:00.291: INFO: Found all expected endpoints: [netserver-0] Mar 10 22:02:00.294: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.45:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2340 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 22:02:00.294: INFO: >>> kubeConfig: /root/.kube/config I0310 22:02:00.319337 6 log.go:172] (0xc0022f1760) (0xc001966500) Create stream I0310 22:02:00.319361 6 log.go:172] (0xc0022f1760) (0xc001966500) Stream added, broadcasting: 1 I0310 22:02:00.326300 6 log.go:172] (0xc0022f1760) Reply frame received for 1 I0310 22:02:00.326336 6 log.go:172] (0xc0022f1760) (0xc00171b7c0) Create stream I0310 22:02:00.326360 6 log.go:172] (0xc0022f1760) (0xc00171b7c0) Stream added, broadcasting: 3 I0310 22:02:00.327437 6 log.go:172] (0xc0022f1760) Reply frame received for 3 I0310 22:02:00.327490 6 log.go:172] (0xc0022f1760) (0xc001206a00) Create stream I0310 22:02:00.327502 6 log.go:172] (0xc0022f1760) (0xc001206a00) Stream added, broadcasting: 5 I0310 22:02:00.328403 6 log.go:172] (0xc0022f1760) Reply frame received for 5 I0310 22:02:00.407730 6 log.go:172] (0xc0022f1760) Data frame received for 3 I0310 22:02:00.407761 6 log.go:172] (0xc00171b7c0) (3) Data frame handling I0310 22:02:00.407783 6 log.go:172] (0xc00171b7c0) (3) Data frame sent I0310 22:02:00.407840 6 log.go:172] (0xc0022f1760) Data frame received for 3 I0310 22:02:00.407865 6 log.go:172] (0xc0022f1760) Data frame received for 5 I0310 22:02:00.407891 6 log.go:172] (0xc001206a00) (5) Data frame handling I0310 22:02:00.407909 6 log.go:172] (0xc00171b7c0) (3) Data frame handling I0310 22:02:00.409395 6 log.go:172] (0xc0022f1760) Data frame received for 1 I0310 22:02:00.409413 6 log.go:172] (0xc001966500) (1) Data frame handling I0310 22:02:00.409427 6 log.go:172] (0xc001966500) (1) Data frame sent I0310 22:02:00.409660 6 log.go:172] (0xc0022f1760) (0xc001966500) Stream removed, broadcasting: 1 I0310 22:02:00.409688 6 log.go:172] (0xc0022f1760) Go away received I0310 22:02:00.409778 6 log.go:172] (0xc0022f1760) (0xc001966500) Stream removed, broadcasting: 1 I0310 22:02:00.409802 6 log.go:172] (0xc0022f1760) (0xc00171b7c0) Stream removed, broadcasting: 3 I0310 22:02:00.409816 6 log.go:172] (0xc0022f1760) (0xc001206a00) Stream removed, broadcasting: 5 Mar 10 22:02:00.409: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:00.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2340" for this suite. 
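Note on the probe pattern above: the framework execs into a hostNetwork test pod and curls each netserver pod's /hostName endpoint, then matches the replies against the expected netserver names. The probe itself, copied from the ExecWithOptions lines (the pod IP and port vary per run):

curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.62:8080/hostName | grep -v '^\s*$'

The endpoint answers with the serving pod's name, which is what "Found all expected endpoints" asserts.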
• [SLOW TEST:20.600 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3476,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:00.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-a37577d6-91e6-481e-a00b-a8a7dd7127a8 STEP: Creating a pod to test consume configMaps Mar 10 22:02:00.878: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ca9d749-6cee-452d-85ab-bd1499b8cd49" in namespace "configmap-9770" to be "success or failure" Mar 10 22:02:00.881: INFO: Pod "pod-configmaps-5ca9d749-6cee-452d-85ab-bd1499b8cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.698693ms Mar 10 22:02:02.884: INFO: Pod "pod-configmaps-5ca9d749-6cee-452d-85ab-bd1499b8cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006244792s Mar 10 22:02:04.888: INFO: Pod "pod-configmaps-5ca9d749-6cee-452d-85ab-bd1499b8cd49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009943535s STEP: Saw pod success Mar 10 22:02:04.888: INFO: Pod "pod-configmaps-5ca9d749-6cee-452d-85ab-bd1499b8cd49" satisfied condition "success or failure" Mar 10 22:02:04.890: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5ca9d749-6cee-452d-85ab-bd1499b8cd49 container configmap-volume-test: STEP: delete the pod Mar 10 22:02:04.953: INFO: Waiting for pod pod-configmaps-5ca9d749-6cee-452d-85ab-bd1499b8cd49 to disappear Mar 10 22:02:04.977: INFO: Pod pod-configmaps-5ca9d749-6cee-452d-85ab-bd1499b8cd49 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:04.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9770" for this suite. 
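Note on the mappings-as-non-root case above: it combines an items list, which remaps a ConfigMap key to a chosen relative path inside the volume, with a pod securityContext that drops root. A sketch with assumed key and path names (it presumes a ConfigMap named configmap-mapping-demo with a data-1 key already exists):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # non-root UID
    runAsNonRoot: true
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "id && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-mapping-demo
      items:
      - key: data-1
        path: path/to/data-2    # data-1 surfaces at this relative path
EOF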
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3484,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:04.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-76f1080a-4bda-4f76-9025-277b27a7e0db STEP: Creating a pod to test consume configMaps Mar 10 22:02:05.034: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f1cff35-23c3-4dd2-bf50-652d5aa440b7" in namespace "configmap-9882" to be "success or failure" Mar 10 22:02:05.037: INFO: Pod "pod-configmaps-5f1cff35-23c3-4dd2-bf50-652d5aa440b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.539846ms Mar 10 22:02:07.041: INFO: Pod "pod-configmaps-5f1cff35-23c3-4dd2-bf50-652d5aa440b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00711989s STEP: Saw pod success Mar 10 22:02:07.041: INFO: Pod "pod-configmaps-5f1cff35-23c3-4dd2-bf50-652d5aa440b7" satisfied condition "success or failure" Mar 10 22:02:07.044: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5f1cff35-23c3-4dd2-bf50-652d5aa440b7 container configmap-volume-test: STEP: delete the pod Mar 10 22:02:07.063: INFO: Waiting for pod pod-configmaps-5f1cff35-23c3-4dd2-bf50-652d5aa440b7 to disappear Mar 10 22:02:07.066: INFO: Pod pod-configmaps-5f1cff35-23c3-4dd2-bf50-652d5aa440b7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:07.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9882" for this suite. 
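Note on the defaultMode case above: defaultMode sets the permission bits stamped on every file projected from the ConfigMap volume. The relevant fragment of a pod's volume spec (0400 is illustrative; the suite's exact mode is not visible in the log):

volumes:
- name: configmap-volume
  configMap:
    name: configmap-defaultmode-demo
    defaultMode: 0400    # files appear as -r-------- inside the container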
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:07.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:13.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4802" for this suite. • [SLOW TEST:6.062 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":213,"skipped":3524,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:13.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c5355a92-c791-468c-ba8e-e9036ef69e4e STEP: Creating a pod to test consume configMaps Mar 10 22:02:13.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c7d6250-577f-40d5-ae17-6189aefb49c4" in namespace "configmap-1407" to be "success or failure" Mar 10 22:02:13.248: INFO: Pod "pod-configmaps-6c7d6250-577f-40d5-ae17-6189aefb49c4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.762255ms Mar 10 22:02:15.252: INFO: Pod "pod-configmaps-6c7d6250-577f-40d5-ae17-6189aefb49c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02874671s Mar 10 22:02:17.256: INFO: Pod "pod-configmaps-6c7d6250-577f-40d5-ae17-6189aefb49c4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032263318s STEP: Saw pod success Mar 10 22:02:17.256: INFO: Pod "pod-configmaps-6c7d6250-577f-40d5-ae17-6189aefb49c4" satisfied condition "success or failure" Mar 10 22:02:17.258: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6c7d6250-577f-40d5-ae17-6189aefb49c4 container configmap-volume-test: STEP: delete the pod Mar 10 22:02:17.279: INFO: Waiting for pod pod-configmaps-6c7d6250-577f-40d5-ae17-6189aefb49c4 to disappear Mar 10 22:02:17.322: INFO: Pod pod-configmaps-6c7d6250-577f-40d5-ae17-6189aefb49c4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:17.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1407" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3569,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:17.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:20.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2817" for this suite. 
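[Editor's note] A sketch of the adoption scenario this test drives: a bare pod is created first, then a ReplicationController whose selector matches its label. Names and the image are assumptions; the run above only shows the generated namespace.

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption                  # the 'name' label is what the RC selector matches
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/httpd:2.4.38-alpine   # assumption: an image this suite pulls elsewhere
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption                # matches the orphan pod, so no new pod is needed
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/httpd:2.4.38-alpine

Adoption is visible as an ownerReference pointing at the RC being added to the pre-existing pod, rather than a second pod being created.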
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":215,"skipped":3574,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:20.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 10 22:02:20.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5925' Mar 10 22:02:20.695: INFO: stderr: "" Mar 10 22:02:20.695: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846 Mar 10 22:02:20.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5925' Mar 10 22:02:26.039: INFO: stderr: "" Mar 10 22:02:26.039: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:26.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5925" for this suite. 
• [SLOW TEST:5.562 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":216,"skipped":3587,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:26.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:26.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6097" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":217,"skipped":3589,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:26.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 22:02:26.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-496bc8cf-6f5d-46a1-aefa-5223ada0ee08" in namespace "downward-api-6676" to be "success or failure" Mar 10 22:02:26.412: INFO: Pod "downwardapi-volume-496bc8cf-6f5d-46a1-aefa-5223ada0ee08": Phase="Pending", Reason="", readiness=false. 
Elapsed: 73.607193ms Mar 10 22:02:28.430: INFO: Pod "downwardapi-volume-496bc8cf-6f5d-46a1-aefa-5223ada0ee08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.092521784s STEP: Saw pod success Mar 10 22:02:28.430: INFO: Pod "downwardapi-volume-496bc8cf-6f5d-46a1-aefa-5223ada0ee08" satisfied condition "success or failure" Mar 10 22:02:28.433: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-496bc8cf-6f5d-46a1-aefa-5223ada0ee08 container client-container: STEP: delete the pod Mar 10 22:02:28.483: INFO: Waiting for pod downwardapi-volume-496bc8cf-6f5d-46a1-aefa-5223ada0ee08 to disappear Mar 10 22:02:28.494: INFO: Pod downwardapi-volume-496bc8cf-6f5d-46a1-aefa-5223ada0ee08 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:28.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6676" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3593,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:28.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 22:02:28.626: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1776881a-f2d5-4dce-9d4e-c9bff99d2943" in namespace "downward-api-6604" to be "success or failure" Mar 10 22:02:28.642: INFO: Pod "downwardapi-volume-1776881a-f2d5-4dce-9d4e-c9bff99d2943": Phase="Pending", Reason="", readiness=false. Elapsed: 15.957564ms Mar 10 22:02:30.646: INFO: Pod "downwardapi-volume-1776881a-f2d5-4dce-9d4e-c9bff99d2943": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019502804s STEP: Saw pod success Mar 10 22:02:30.646: INFO: Pod "downwardapi-volume-1776881a-f2d5-4dce-9d4e-c9bff99d2943" satisfied condition "success or failure" Mar 10 22:02:30.648: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1776881a-f2d5-4dce-9d4e-c9bff99d2943 container client-container: STEP: delete the pod Mar 10 22:02:30.663: INFO: Waiting for pod downwardapi-volume-1776881a-f2d5-4dce-9d4e-c9bff99d2943 to disappear Mar 10 22:02:30.667: INFO: Pod downwardapi-volume-1776881a-f2d5-4dce-9d4e-c9bff99d2943 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:30.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6604" for this suite. 
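[Editor's note] Both Downward API cases above (memory limit, then cpu request) project container resource fields into a volume file via resourceFieldRef. A minimal sketch for the cpu-request variant; names, image, and values are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the run used UUID names
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name taken from the log above
    image: docker.io/library/busybox:1.29   # assumption
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # illustrative request to be reflected back
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # expose the value in millicores (prints 250)

For the preceding memory-limit test, the item would instead use resource: limits.memory.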
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3600,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:30.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 10 22:02:30.738: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 10 22:02:30.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6079' Mar 10 22:02:31.044: INFO: stderr: "" Mar 10 22:02:31.044: INFO: stdout: "service/agnhost-slave created\n" Mar 10 22:02:31.045: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 10 22:02:31.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6079' Mar 10 22:02:31.318: INFO: stderr: "" Mar 10 22:02:31.318: INFO: stdout: "service/agnhost-master created\n" Mar 10 22:02:31.318: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 10 22:02:31.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6079' Mar 10 22:02:31.599: INFO: stderr: "" Mar 10 22:02:31.599: INFO: stdout: "service/frontend created\n" Mar 10 22:02:31.599: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 10 22:02:31.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6079' Mar 10 22:02:31.838: INFO: stderr: "" Mar 10 22:02:31.838: INFO: stdout: "deployment.apps/frontend created\n" Mar 10 22:02:31.839: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 10 22:02:31.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6079' Mar 10 22:02:32.118: INFO: stderr: "" Mar 10 22:02:32.118: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 10 22:02:32.118: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 10 22:02:32.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6079' Mar 10 22:02:32.359: INFO: stderr: "" Mar 10 22:02:32.359: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 10 22:02:32.359: INFO: Waiting for all frontend pods to be Running. Mar 10 22:02:37.409: INFO: Waiting for frontend to serve content. Mar 10 22:02:37.419: INFO: Trying to add a new entry to the guestbook. Mar 10 22:02:37.427: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 10 22:02:37.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6079' Mar 10 22:02:37.613: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 10 22:02:37.613: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 10 22:02:37.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6079' Mar 10 22:02:37.754: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 10 22:02:37.755: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 10 22:02:37.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6079' Mar 10 22:02:37.833: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 10 22:02:37.833: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 10 22:02:37.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6079' Mar 10 22:02:37.911: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 10 22:02:37.911: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 10 22:02:37.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6079' Mar 10 22:02:38.007: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 10 22:02:38.007: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 10 22:02:38.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6079' Mar 10 22:02:38.079: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 10 22:02:38.079: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:38.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6079" for this suite. 
• [SLOW TEST:7.423 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":220,"skipped":3605,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:38.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-3ec20b14-ee4d-4b05-b7b8-118142f80e64 STEP: Creating a pod to test consume secrets Mar 10 22:02:38.262: INFO: Waiting up to 5m0s for pod "pod-secrets-64fe0531-ad5a-48db-994d-d30c6a3fd367" in namespace "secrets-4482" to be "success or failure" Mar 10 22:02:38.304: INFO: Pod "pod-secrets-64fe0531-ad5a-48db-994d-d30c6a3fd367": Phase="Pending", Reason="", readiness=false. Elapsed: 41.941126ms Mar 10 22:02:40.307: INFO: Pod "pod-secrets-64fe0531-ad5a-48db-994d-d30c6a3fd367": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044792305s Mar 10 22:02:42.310: INFO: Pod "pod-secrets-64fe0531-ad5a-48db-994d-d30c6a3fd367": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047428616s STEP: Saw pod success Mar 10 22:02:42.310: INFO: Pod "pod-secrets-64fe0531-ad5a-48db-994d-d30c6a3fd367" satisfied condition "success or failure" Mar 10 22:02:42.311: INFO: Trying to get logs from node jerma-worker pod pod-secrets-64fe0531-ad5a-48db-994d-d30c6a3fd367 container secret-volume-test: STEP: delete the pod Mar 10 22:02:42.328: INFO: Waiting for pod pod-secrets-64fe0531-ad5a-48db-994d-d30c6a3fd367 to disappear Mar 10 22:02:42.333: INFO: Pod pod-secrets-64fe0531-ad5a-48db-994d-d30c6a3fd367 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:42.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4482" for this suite. 
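[Editor's note] "Consumable in multiple volumes" means the same Secret is mounted at two paths in one pod. A minimal sketch, with illustrative names and image; dmFsdWUtMQ== is simply base64 for "value-1":

apiVersion: v1
kind: Secret
metadata:
  name: secret-test                  # illustrative
data:
  data-1: dmFsdWUtMQ==
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test         # container name taken from the log above
    image: docker.io/library/busybox:1.29   # assumption
    command: ["/bin/sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test        # the same Secret backs both volumes
  - name: secret-volume-2
    secret:
      secretName: secret-test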
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:42.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 10 22:02:42.425: INFO: Waiting up to 5m0s for pod "pod-f52cad85-83d3-4600-a875-b8595465162f" in namespace "emptydir-6732" to be "success or failure" Mar 10 22:02:42.430: INFO: Pod "pod-f52cad85-83d3-4600-a875-b8595465162f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.87836ms Mar 10 22:02:44.435: INFO: Pod "pod-f52cad85-83d3-4600-a875-b8595465162f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009252961s STEP: Saw pod success Mar 10 22:02:44.435: INFO: Pod "pod-f52cad85-83d3-4600-a875-b8595465162f" satisfied condition "success or failure" Mar 10 22:02:44.444: INFO: Trying to get logs from node jerma-worker pod pod-f52cad85-83d3-4600-a875-b8595465162f container test-container: STEP: delete the pod Mar 10 22:02:44.497: INFO: Waiting for pod pod-f52cad85-83d3-4600-a875-b8595465162f to disappear Mar 10 22:02:44.507: INFO: Pod pod-f52cad85-83d3-4600-a875-b8595465162f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:44.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6732" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:44.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:02:46.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2259" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3671,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:02:46.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1251, will wait for the garbage collector to delete the pods Mar 10 22:02:48.752: INFO: Deleting Job.batch foo took: 5.425044ms Mar 10 22:02:48.852: INFO: Terminating Job.batch foo pods took: 100.235626ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:03:22.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1251" for this suite. 
• [SLOW TEST:35.734 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":224,"skipped":3689,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:03:22.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:04:22.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2202" for this suite. • [SLOW TEST:60.106 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3724,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:04:22.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Mar 10 22:04:26.581: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6855 PodName:pod-sharedvolume-4ce8bfb8-3eb9-4caa-a7ba-56542e2da1e9 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 22:04:26.582: INFO: >>>
kubeConfig: /root/.kube/config I0310 22:04:26.610228 6 log.go:172] (0xc002db64d0) (0xc000258640) Create stream I0310 22:04:26.610261 6 log.go:172] (0xc002db64d0) (0xc000258640) Stream added, broadcasting: 1 I0310 22:04:26.611880 6 log.go:172] (0xc002db64d0) Reply frame received for 1 I0310 22:04:26.611914 6 log.go:172] (0xc002db64d0) (0xc0002588c0) Create stream I0310 22:04:26.611927 6 log.go:172] (0xc002db64d0) (0xc0002588c0) Stream added, broadcasting: 3 I0310 22:04:26.612746 6 log.go:172] (0xc002db64d0) Reply frame received for 3 I0310 22:04:26.612770 6 log.go:172] (0xc002db64d0) (0xc000259180) Create stream I0310 22:04:26.612779 6 log.go:172] (0xc002db64d0) (0xc000259180) Stream added, broadcasting: 5 I0310 22:04:26.613685 6 log.go:172] (0xc002db64d0) Reply frame received for 5 I0310 22:04:26.684962 6 log.go:172] (0xc002db64d0) Data frame received for 5 I0310 22:04:26.684990 6 log.go:172] (0xc000259180) (5) Data frame handling I0310 22:04:26.685012 6 log.go:172] (0xc002db64d0) Data frame received for 3 I0310 22:04:26.685024 6 log.go:172] (0xc0002588c0) (3) Data frame handling I0310 22:04:26.685037 6 log.go:172] (0xc0002588c0) (3) Data frame sent I0310 22:04:26.685045 6 log.go:172] (0xc002db64d0) Data frame received for 3 I0310 22:04:26.685051 6 log.go:172] (0xc0002588c0) (3) Data frame handling I0310 22:04:26.686091 6 log.go:172] (0xc002db64d0) Data frame received for 1 I0310 22:04:26.686151 6 log.go:172] (0xc000258640) (1) Data frame handling I0310 22:04:26.686172 6 log.go:172] (0xc000258640) (1) Data frame sent I0310 22:04:26.686188 6 log.go:172] (0xc002db64d0) (0xc000258640) Stream removed, broadcasting: 1 I0310 22:04:26.686266 6 log.go:172] (0xc002db64d0) Go away received I0310 22:04:26.686300 6 log.go:172] (0xc002db64d0) (0xc000258640) Stream removed, broadcasting: 1 I0310 22:04:26.686323 6 log.go:172] (0xc002db64d0) (0xc0002588c0) Stream removed, broadcasting: 3 I0310 22:04:26.686333 6 log.go:172] (0xc002db64d0) (0xc000259180) Stream removed, broadcasting: 5 Mar 10 22:04:26.686: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:04:26.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6855" for this suite. 
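[Editor's note] The exec above reads /usr/share/volumeshare/shareddata.txt from busybox-main-container, i.e. one container reads what a sibling container wrote into a shared emptyDir. A minimal sketch using the mount path from the log; the image, commands, and the second container name are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-example     # the run used a generated name
spec:
  containers:
  - name: busybox-main-container     # name taken from the exec in the log
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container      # assumption
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo Hello from the sub container > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}                     # one backing directory, visible to both containers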
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":226,"skipped":3739,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:04:26.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 10 22:04:26.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3367' Mar 10 22:04:26.827: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 10 22:04:26.827: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718 Mar 10 22:04:30.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3367' Mar 10 22:04:30.986: INFO: stderr: "" Mar 10 22:04:30.986: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:04:30.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3367" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":227,"skipped":3783,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:04:30.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 10 22:04:33.057: INFO: &Pod{ObjectMeta:{send-events-9f26020b-afa5-4ae5-9483-5f09872f05f5 events-8073 /api/v1/namespaces/events-8073/pods/send-events-9f26020b-afa5-4ae5-9483-5f09872f05f5 6f774aee-2974-4894-a3ac-939f41d17eed 689101 0 2020-03-10 22:04:31 +0000 UTC map[name:foo time:32257686] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hwwxw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hwwxw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hwwxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeco
nds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:04:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:04:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.83,StartTime:2020-03-10 22:04:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 22:04:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1a27e8a13ae036190a03ca884fdf952b5b93cdd09e11cb2bae26ae35a64abf43,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 10 22:04:35.061: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 10 22:04:37.065: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:04:37.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8073" for this suite. 
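[Editor's note] The test waits for two events attached to the pod dumped above: one emitted by the scheduler, one by the kubelet. For reference, the scheduler event has roughly this shape; the event name and the message wording are illustrative, not taken from this run:

apiVersion: v1
kind: Event
metadata:
  name: send-events-example.0000000000000000   # server-generated suffix; illustrative
  namespace: events-8073
type: Normal
reason: Scheduled
message: Successfully assigned the pod to jerma-worker   # illustrative wording
involvedObject:
  apiVersion: v1
  kind: Pod
  name: send-events-9f26020b-afa5-4ae5-9483-5f09872f05f5
  namespace: events-8073
source:
  component: default-scheduler

The kubelet side typically shows up as Pulled/Created/Started events whose source.component is kubelet, with the node name in source.host.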
• [SLOW TEST:6.147 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":228,"skipped":3793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:04:37.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:04:39.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4805" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3820,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:04:39.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 10 22:04:39.334: INFO: Waiting up to 5m0s for pod "downward-api-be2f87a1-2038-44ce-befa-8fc4cdb498ac" in namespace "downward-api-5356" to be "success or failure" Mar 10 22:04:39.345: INFO: Pod "downward-api-be2f87a1-2038-44ce-befa-8fc4cdb498ac": Phase="Pending", Reason="", readiness=false. Elapsed: 11.257388ms Mar 10 22:04:41.349: INFO: Pod "downward-api-be2f87a1-2038-44ce-befa-8fc4cdb498ac": Phase="Running", Reason="", readiness=true. Elapsed: 2.015035873s Mar 10 22:04:43.353: INFO: Pod "downward-api-be2f87a1-2038-44ce-befa-8fc4cdb498ac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018878081s STEP: Saw pod success Mar 10 22:04:43.353: INFO: Pod "downward-api-be2f87a1-2038-44ce-befa-8fc4cdb498ac" satisfied condition "success or failure" Mar 10 22:04:43.356: INFO: Trying to get logs from node jerma-worker2 pod downward-api-be2f87a1-2038-44ce-befa-8fc4cdb498ac container dapi-container: STEP: delete the pod Mar 10 22:04:43.432: INFO: Waiting for pod downward-api-be2f87a1-2038-44ce-befa-8fc4cdb498ac to disappear Mar 10 22:04:43.439: INFO: Pod downward-api-be2f87a1-2038-44ce-befa-8fc4cdb498ac no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:04:43.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5356" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3839,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:04:43.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:04:58.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9771" for this suite. STEP: Destroying namespace "nsdeletetest-8483" for this suite. Mar 10 22:04:58.641: INFO: Namespace nsdeletetest-8483 was already deleted STEP: Destroying namespace "nsdeletetest-2952" for this suite. 
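[Editor's note] Namespace deletion is asynchronous: the namespace enters Terminating, the namespace controller deletes everything inside it (the pod here), and only then is the "kubernetes" finalizer cleared so the object disappears. That is the wait between "Deleting the namespace" and "Waiting for the namespace to be removed" above. Sketch of the relevant field (name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example         # illustrative; the run used generated names
spec:
  finalizers:
  - kubernetes                       # default finalizer; removed once all contents are purged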
• [SLOW TEST:15.198 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":231,"skipped":3890,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:04:58.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-989 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-989 STEP: creating replication controller externalsvc in namespace services-989 I0310 22:04:58.854848 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-989, replica count: 2 I0310 22:05:01.905402 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 10 22:05:01.951: INFO: Creating new exec pod Mar 10 22:05:03.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-989 execpodldrx8 -- /bin/sh -x -c nslookup clusterip-service' Mar 10 22:05:04.190: INFO: stderr: "I0310 22:05:04.103629 3703 log.go:172] (0xc000aaa000) (0xc0006ce6e0) Create stream\nI0310 22:05:04.103676 3703 log.go:172] (0xc000aaa000) (0xc0006ce6e0) Stream added, broadcasting: 1\nI0310 22:05:04.105777 3703 log.go:172] (0xc000aaa000) Reply frame received for 1\nI0310 22:05:04.105820 3703 log.go:172] (0xc000aaa000) (0xc0004734a0) Create stream\nI0310 22:05:04.105837 3703 log.go:172] (0xc000aaa000) (0xc0004734a0) Stream added, broadcasting: 3\nI0310 22:05:04.106625 3703 log.go:172] (0xc000aaa000) Reply frame received for 3\nI0310 22:05:04.106648 3703 log.go:172] (0xc000aaa000) (0xc000906000) Create stream\nI0310 22:05:04.106657 3703 log.go:172] (0xc000aaa000) (0xc000906000) Stream added, broadcasting: 5\nI0310 22:05:04.107340 3703 log.go:172] (0xc000aaa000) Reply frame received for 5\nI0310 22:05:04.173304 3703 log.go:172] (0xc000aaa000) Data frame received for 5\nI0310 22:05:04.173331 3703 log.go:172] (0xc000906000) (5) Data frame handling\nI0310 22:05:04.173350 3703 log.go:172] (0xc000906000) (5) Data frame sent\n+ nslookup clusterip-service\nI0310 22:05:04.182694 3703 
log.go:172] (0xc000aaa000) Data frame received for 3\nI0310 22:05:04.182715 3703 log.go:172] (0xc0004734a0) (3) Data frame handling\nI0310 22:05:04.182730 3703 log.go:172] (0xc0004734a0) (3) Data frame sent\nI0310 22:05:04.184228 3703 log.go:172] (0xc000aaa000) Data frame received for 3\nI0310 22:05:04.184263 3703 log.go:172] (0xc0004734a0) (3) Data frame handling\nI0310 22:05:04.184284 3703 log.go:172] (0xc0004734a0) (3) Data frame sent\nI0310 22:05:04.184841 3703 log.go:172] (0xc000aaa000) Data frame received for 5\nI0310 22:05:04.184857 3703 log.go:172] (0xc000906000) (5) Data frame handling\nI0310 22:05:04.184877 3703 log.go:172] (0xc000aaa000) Data frame received for 3\nI0310 22:05:04.184887 3703 log.go:172] (0xc0004734a0) (3) Data frame handling\nI0310 22:05:04.186634 3703 log.go:172] (0xc000aaa000) Data frame received for 1\nI0310 22:05:04.186687 3703 log.go:172] (0xc0006ce6e0) (1) Data frame handling\nI0310 22:05:04.186714 3703 log.go:172] (0xc0006ce6e0) (1) Data frame sent\nI0310 22:05:04.186737 3703 log.go:172] (0xc000aaa000) (0xc0006ce6e0) Stream removed, broadcasting: 1\nI0310 22:05:04.186782 3703 log.go:172] (0xc000aaa000) Go away received\nI0310 22:05:04.187215 3703 log.go:172] (0xc000aaa000) (0xc0006ce6e0) Stream removed, broadcasting: 1\nI0310 22:05:04.187231 3703 log.go:172] (0xc000aaa000) (0xc0004734a0) Stream removed, broadcasting: 3\nI0310 22:05:04.187238 3703 log.go:172] (0xc000aaa000) (0xc000906000) Stream removed, broadcasting: 5\n" Mar 10 22:05:04.190: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-989.svc.cluster.local\tcanonical name = externalsvc.services-989.svc.cluster.local.\nName:\texternalsvc.services-989.svc.cluster.local\nAddress: 10.104.5.189\n\n" STEP: deleting ReplicationController externalsvc in namespace services-989, will wait for the garbage collector to delete the pods Mar 10 22:05:04.248: INFO: Deleting ReplicationController externalsvc took: 4.40921ms Mar 10 22:05:04.549: INFO: Terminating ReplicationController externalsvc pods took: 300.254419ms Mar 10 22:05:08.784: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:05:08.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-989" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.224 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":232,"skipped":3899,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:05:08.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 10 22:05:09.022: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 10 22:05:14.026: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:05:14.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-848" for this suite. 
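[Editor's note] "Release" is the inverse of the adoption test earlier: when a managed pod's label is changed so it no longer matches the selector, the RC controller drops its ownerReference (orphaning the pod) and creates a replacement to restore the desired replica count. A sketch of the controller involved; names and image are assumptions:

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release                # relabeling a pod out of this selector releases it
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: docker.io/library/httpd:2.4.38-alpine   # assumption

After something like kubectl label pod <pod> name=pod-release-orphan --overwrite, the original pod keeps running unmanaged while the RC spins up a fresh replica.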
• [SLOW TEST:5.302 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":233,"skipped":3903,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:05:14.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-2172/configmap-test-32be3a04-8434-449e-a05e-30b8f7bb99cb STEP: Creating a pod to test consume configMaps Mar 10 22:05:14.274: INFO: Waiting up to 5m0s for pod "pod-configmaps-326b55cf-3dd0-45c7-a68d-ed32bf973b18" in namespace "configmap-2172" to be "success or failure" Mar 10 22:05:14.279: INFO: Pod "pod-configmaps-326b55cf-3dd0-45c7-a68d-ed32bf973b18": Phase="Pending", Reason="", readiness=false. Elapsed: 5.12592ms Mar 10 22:05:16.282: INFO: Pod "pod-configmaps-326b55cf-3dd0-45c7-a68d-ed32bf973b18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008489916s STEP: Saw pod success Mar 10 22:05:16.282: INFO: Pod "pod-configmaps-326b55cf-3dd0-45c7-a68d-ed32bf973b18" satisfied condition "success or failure" Mar 10 22:05:16.285: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-326b55cf-3dd0-45c7-a68d-ed32bf973b18 container env-test: STEP: delete the pod Mar 10 22:05:16.298: INFO: Waiting for pod pod-configmaps-326b55cf-3dd0-45c7-a68d-ed32bf973b18 to disappear Mar 10 22:05:16.302: INFO: Pod pod-configmaps-326b55cf-3dd0-45c7-a68d-ed32bf973b18 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:05:16.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2172" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3911,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:05:16.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:05:27.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6532" for this suite. • [SLOW TEST:11.196 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":235,"skipped":3944,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:05:27.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 10 22:05:27.576: INFO: Waiting up to 5m0s for pod "pod-99b2d19c-c849-4f49-aca5-a007dbc42aed" in namespace "emptydir-371" to be "success or failure" Mar 10 22:05:27.580: INFO: Pod "pod-99b2d19c-c849-4f49-aca5-a007dbc42aed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.916435ms Mar 10 22:05:29.599: INFO: Pod "pod-99b2d19c-c849-4f49-aca5-a007dbc42aed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.023543252s STEP: Saw pod success Mar 10 22:05:29.599: INFO: Pod "pod-99b2d19c-c849-4f49-aca5-a007dbc42aed" satisfied condition "success or failure" Mar 10 22:05:29.602: INFO: Trying to get logs from node jerma-worker pod pod-99b2d19c-c849-4f49-aca5-a007dbc42aed container test-container: STEP: delete the pod Mar 10 22:05:29.646: INFO: Waiting for pod pod-99b2d19c-c849-4f49-aca5-a007dbc42aed to disappear Mar 10 22:05:29.658: INFO: Pod pod-99b2d19c-c849-4f49-aca5-a007dbc42aed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:05:29.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-371" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3959,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:05:29.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:05:31.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7697" for this suite. 
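------------------------------
[editor's note] The Kubelet test above schedules a busybox container whose root filesystem is mounted read-only and verifies that writes to it fail. A minimal sketch of such a pod, assuming illustrative names; securityContext.readOnlyRootFilesystem is the standard API field being exercised:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs        # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # The redirect fails with "Read-only file system"; reads still succeed.
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true

If a container like this needs scratch space, the usual pattern is to mount an emptyDir volume (as in the tmpfs test above) rather than relax the root filesystem.
------------------------------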
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3961,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:05:31.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-9f751dd0-8109-4bcd-a787-99566d9e08e5 in namespace container-probe-2097 Mar 10 22:05:33.920: INFO: Started pod busybox-9f751dd0-8109-4bcd-a787-99566d9e08e5 in namespace container-probe-2097 STEP: checking the pod's current state and verifying that restartCount is present Mar 10 22:05:33.923: INFO: Initial restart count of pod busybox-9f751dd0-8109-4bcd-a787-99566d9e08e5 is 0 Mar 10 22:06:22.045: INFO: Restart count of pod container-probe-2097/busybox-9f751dd0-8109-4bcd-a787-99566d9e08e5 is now 1 (48.122251423s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:06:22.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2097" for this suite. • [SLOW TEST:50.281 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":4005,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:06:22.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:06:22.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6945" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":239,"skipped":4018,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:06:22.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-8847/configmap-test-38439113-e52e-4473-8927-15246ab4b57c STEP: Creating a pod to test consume configMaps Mar 10 22:06:22.372: INFO: Waiting up to 5m0s for pod "pod-configmaps-845c820d-21d9-4e9a-bf28-3e61e7630714" in namespace "configmap-8847" to be "success or failure" Mar 10 22:06:22.395: INFO: Pod "pod-configmaps-845c820d-21d9-4e9a-bf28-3e61e7630714": Phase="Pending", Reason="", readiness=false. Elapsed: 22.601096ms Mar 10 22:06:24.398: INFO: Pod "pod-configmaps-845c820d-21d9-4e9a-bf28-3e61e7630714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025683098s STEP: Saw pod success Mar 10 22:06:24.398: INFO: Pod "pod-configmaps-845c820d-21d9-4e9a-bf28-3e61e7630714" satisfied condition "success or failure" Mar 10 22:06:24.400: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-845c820d-21d9-4e9a-bf28-3e61e7630714 container env-test: STEP: delete the pod Mar 10 22:06:24.433: INFO: Waiting for pod pod-configmaps-845c820d-21d9-4e9a-bf28-3e61e7630714 to disappear Mar 10 22:06:24.441: INFO: Pod pod-configmaps-845c820d-21d9-4e9a-bf28-3e61e7630714 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:06:24.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8847" for this suite. 
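------------------------------
[editor's note] The two ConfigMap tests in this run ("consumable via environment variable" and "consumable via the environment") cover the two injection paths: env[].valueFrom.configMapKeyRef maps one key to one variable, while envFrom[].configMapRef imports every key of the ConfigMap. A minimal sketch with assumed names (env-test mirrors the container name in the log):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test             # assumed
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps             # assumed
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: DATA_1                 # single key via valueFrom
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
    envFrom:                       # every key, names used as-is
    - configMapRef:
        name: configmap-test

The pod prints its environment once and exits 0, which is why the test waits for the "success or failure" condition and then scrapes the container log.
------------------------------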
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4067,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:06:24.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:06:24.519: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:06:30.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-235" for this suite. • [SLOW TEST:6.227 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":241,"skipped":4080,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:06:30.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3707 A)" && test -n "$$check" 
&& echo OK > /results/wheezy_udp@dns-test-service.dns-3707;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3707 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3707;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3707.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3707.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3707.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3707.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3707.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3707.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3707.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3707.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3707.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3707.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3707.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 17.243.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.243.17_udp@PTR;check="$$(dig +tcp +noall +answer +search 17.243.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.243.17_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3707 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3707;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3707 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3707;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3707.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3707.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3707.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3707.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3707.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3707.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3707.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3707.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3707.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3707.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3707.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3707.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 17.243.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.243.17_udp@PTR;check="$$(dig +tcp +noall +answer +search 17.243.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.243.17_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 10 22:06:34.831: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.835: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.838: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.840: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.842: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.845: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.847: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.850: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.866: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.868: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.870: INFO: Unable to read jessie_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.872: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.875: INFO: Unable to read jessie_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.877: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.879: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.882: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:34.895: INFO: Lookups using dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3707 wheezy_tcp@dns-test-service.dns-3707 wheezy_udp@dns-test-service.dns-3707.svc wheezy_tcp@dns-test-service.dns-3707.svc wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3707 jessie_tcp@dns-test-service.dns-3707 jessie_udp@dns-test-service.dns-3707.svc jessie_tcp@dns-test-service.dns-3707.svc jessie_udp@_http._tcp.dns-test-service.dns-3707.svc jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc] Mar 10 22:06:39.899: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.902: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.908: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.911: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.913: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.916: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.919: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.941: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.943: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.946: INFO: Unable to read jessie_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.949: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.951: INFO: Unable to read jessie_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.954: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.958: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.961: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:39.978: INFO: Lookups using dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3707 wheezy_tcp@dns-test-service.dns-3707 wheezy_udp@dns-test-service.dns-3707.svc wheezy_tcp@dns-test-service.dns-3707.svc wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3707 jessie_tcp@dns-test-service.dns-3707 jessie_udp@dns-test-service.dns-3707.svc jessie_tcp@dns-test-service.dns-3707.svc jessie_udp@_http._tcp.dns-test-service.dns-3707.svc jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc] Mar 10 22:06:44.900: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.903: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.909: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707 from pod 
dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.911: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.914: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.917: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.920: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.938: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.940: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.942: INFO: Unable to read jessie_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.944: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.946: INFO: Unable to read jessie_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.949: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.952: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.954: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:44.975: INFO: Lookups using dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3707 wheezy_tcp@dns-test-service.dns-3707 wheezy_udp@dns-test-service.dns-3707.svc wheezy_tcp@dns-test-service.dns-3707.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3707 jessie_tcp@dns-test-service.dns-3707 jessie_udp@dns-test-service.dns-3707.svc jessie_tcp@dns-test-service.dns-3707.svc jessie_udp@_http._tcp.dns-test-service.dns-3707.svc jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc] Mar 10 22:06:49.900: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.904: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.907: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.910: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.913: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.917: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.920: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.923: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.949: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.951: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.954: INFO: Unable to read jessie_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.957: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.959: INFO: Unable to read jessie_udp@dns-test-service.dns-3707.svc from pod 
dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.962: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.965: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.967: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:49.991: INFO: Lookups using dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3707 wheezy_tcp@dns-test-service.dns-3707 wheezy_udp@dns-test-service.dns-3707.svc wheezy_tcp@dns-test-service.dns-3707.svc wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3707 jessie_tcp@dns-test-service.dns-3707 jessie_udp@dns-test-service.dns-3707.svc jessie_tcp@dns-test-service.dns-3707.svc jessie_udp@_http._tcp.dns-test-service.dns-3707.svc jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc] Mar 10 22:06:54.899: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.901: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.904: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.906: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.910: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.912: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.915: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.917: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod 
dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.939: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.941: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.944: INFO: Unable to read jessie_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.946: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.948: INFO: Unable to read jessie_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.955: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.957: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.959: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:54.971: INFO: Lookups using dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3707 wheezy_tcp@dns-test-service.dns-3707 wheezy_udp@dns-test-service.dns-3707.svc wheezy_tcp@dns-test-service.dns-3707.svc wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3707 jessie_tcp@dns-test-service.dns-3707 jessie_udp@dns-test-service.dns-3707.svc jessie_tcp@dns-test-service.dns-3707.svc jessie_udp@_http._tcp.dns-test-service.dns-3707.svc jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc] Mar 10 22:06:59.899: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.902: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the 
server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.909: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.913: INFO: Unable to read wheezy_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.916: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.919: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.923: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.948: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.950: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.953: INFO: Unable to read jessie_udp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.956: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707 from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.959: INFO: Unable to read jessie_udp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.961: INFO: Unable to read jessie_tcp@dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.964: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.967: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc from pod dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73: the server could not find the requested resource (get pods dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73) Mar 10 22:06:59.985: INFO: Lookups using dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3707 wheezy_tcp@dns-test-service.dns-3707 wheezy_udp@dns-test-service.dns-3707.svc wheezy_tcp@dns-test-service.dns-3707.svc wheezy_udp@_http._tcp.dns-test-service.dns-3707.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3707.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3707 jessie_tcp@dns-test-service.dns-3707 jessie_udp@dns-test-service.dns-3707.svc jessie_tcp@dns-test-service.dns-3707.svc jessie_udp@_http._tcp.dns-test-service.dns-3707.svc jessie_tcp@_http._tcp.dns-test-service.dns-3707.svc] Mar 10 22:07:05.017: INFO: DNS probes using dns-3707/dns-test-9e8386a3-b521-4b58-9831-bb1135ed4b73 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:05.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3707" for this suite. • [SLOW TEST:34.622 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":242,"skipped":4096,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:05.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 10 22:07:07.369: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:07.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2764" for this suite. 
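------------------------------
[editor's note] The Container Runtime test above verifies the termination-message precedence rule: the kubelet reports whatever the container wrote to terminationMessagePath, and with terminationMessagePolicy: FallbackToLogsOnError it falls back to the tail of the container log only when the container fails and that file is empty. This pod exits 0 with "OK" in the file, so "OK" is reported. A minimal sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-test   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message
    image: busybox
    # Writes the message, then exits successfully, so the file wins.
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log   # the default path
    terminationMessagePolicy: FallbackToLogsOnError
------------------------------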
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4099,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:07.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:07:07.464: INFO: Creating deployment "test-recreate-deployment" Mar 10 22:07:07.467: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 10 22:07:07.497: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 10 22:07:09.503: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 10 22:07:09.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719474827, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719474827, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719474827, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719474827, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 10 22:07:11.510: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 10 22:07:11.516: INFO: Updating deployment test-recreate-deployment Mar 10 22:07:11.516: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 10 22:07:11.719: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4547 /apis/apps/v1/namespaces/deployment-4547/deployments/test-recreate-deployment 1931fe3b-b5a5-4c1c-9fa7-71e54ec45756 690169 2 2020-03-10 22:07:07 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] 
map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027df628 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-10 22:07:11 +0000 UTC,LastTransitionTime:2020-03-10 22:07:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-10 22:07:11 +0000 UTC,LastTransitionTime:2020-03-10 22:07:07 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 10 22:07:11.723: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4547 /apis/apps/v1/namespaces/deployment-4547/replicasets/test-recreate-deployment-5f94c574ff 41480ba2-16bf-43b7-9ae0-ddc0bf33e890 690167 1 2020-03-10 22:07:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1931fe3b-b5a5-4c1c-9fa7-71e54ec45756 0xc0022c1307 0xc0022c1308}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022c13e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 10 22:07:11.723: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 10 22:07:11.723: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-4547 /apis/apps/v1/namespaces/deployment-4547/replicasets/test-recreate-deployment-799c574856 2873b723-8b67-4768-a77c-fec2c43b2593 
690156 2 2020-03-10 22:07:07 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1931fe3b-b5a5-4c1c-9fa7-71e54ec45756 0xc0022c1457 0xc0022c1458}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022c14c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 10 22:07:11.726: INFO: Pod "test-recreate-deployment-5f94c574ff-84hwc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-84hwc test-recreate-deployment-5f94c574ff- deployment-4547 /api/v1/namespaces/deployment-4547/pods/test-recreate-deployment-5f94c574ff-84hwc f6b5f134-16d8-481f-8f7d-de518808cb35 690168 0 2020-03-10 22:07:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 41480ba2-16bf-43b7-9ae0-ddc0bf33e890 0xc0022c1967 0xc0022c1968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7nktl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7nktl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7nktl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:07:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:07:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:07:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:07:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-10 22:07:11 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:11.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4547" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":244,"skipped":4114,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:11.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:27.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6690" for this suite. • [SLOW TEST:16.120 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":245,"skipped":4124,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:27.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 10 22:07:27.917: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:31.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9512" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":246,"skipped":4137,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:31.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 10 22:07:31.709: INFO: Waiting up to 5m0s for pod "pod-67bfa334-66bc-4427-aeb9-8065cae98594" in namespace "emptydir-1293" to be "success or failure" Mar 10 22:07:31.723: INFO: Pod "pod-67bfa334-66bc-4427-aeb9-8065cae98594": Phase="Pending", Reason="", readiness=false. Elapsed: 14.250553ms Mar 10 22:07:33.738: INFO: Pod "pod-67bfa334-66bc-4427-aeb9-8065cae98594": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.029291404s STEP: Saw pod success Mar 10 22:07:33.738: INFO: Pod "pod-67bfa334-66bc-4427-aeb9-8065cae98594" satisfied condition "success or failure" Mar 10 22:07:33.741: INFO: Trying to get logs from node jerma-worker pod pod-67bfa334-66bc-4427-aeb9-8065cae98594 container test-container: STEP: delete the pod Mar 10 22:07:33.774: INFO: Waiting for pod pod-67bfa334-66bc-4427-aeb9-8065cae98594 to disappear Mar 10 22:07:33.782: INFO: Pod pod-67bfa334-66bc-4427-aeb9-8065cae98594 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:33.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1293" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4141,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:33.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 10 22:07:36.426: INFO: Successfully updated pod "labelsupdate8d3173d2-c09e-4cb8-95e1-79bbc431d88d" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:38.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3207" for this suite. 
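------------------------------
For context on the labels-update check above: the kubelet rewrites a downwardAPI volume file whenever the pod's labels change, which is what the test waits for after "Successfully updated pod". A minimal client-go sketch of such a pod follows; the pod name, image, and command are illustrative assumptions, not the framework's own fixture.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsDownwardAPIPod projects the pod's own labels into
// /etc/podinfo/labels; the kubelet refreshes that file after a label update.
func labelsDownwardAPIPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "labelsupdate-example", // hypothetical name
            Labels: map[string]string{"key": "value"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "docker.io/library/busybox:1.29", // illustrative image
                Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "labels",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                },
            }},
        },
    }
}
------------------------------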
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4153,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:38.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 10 22:07:41.056: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4597 pod-service-account-dfde3f3e-7683-4ecb-89fb-b75c75a7acbf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 10 22:07:41.233: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4597 pod-service-account-dfde3f3e-7683-4ecb-89fb-b75c75a7acbf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 10 22:07:41.396: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4597 pod-service-account-dfde3f3e-7683-4ecb-89fb-b75c75a7acbf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:41.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4597" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":249,"skipped":4155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:41.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 10 22:07:41.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0734f054-4d1b-426f-bc41-d0f69209e34e" in namespace "downward-api-2001" to be "success or failure" Mar 10 22:07:41.615: INFO: Pod "downwardapi-volume-0734f054-4d1b-426f-bc41-d0f69209e34e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.643004ms Mar 10 22:07:43.618: INFO: Pod "downwardapi-volume-0734f054-4d1b-426f-bc41-d0f69209e34e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00817798s STEP: Saw pod success Mar 10 22:07:43.618: INFO: Pod "downwardapi-volume-0734f054-4d1b-426f-bc41-d0f69209e34e" satisfied condition "success or failure" Mar 10 22:07:43.622: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0734f054-4d1b-426f-bc41-d0f69209e34e container client-container: STEP: delete the pod Mar 10 22:07:43.640: INFO: Waiting for pod downwardapi-volume-0734f054-4d1b-426f-bc41-d0f69209e34e to disappear Mar 10 22:07:43.644: INFO: Pod downwardapi-volume-0734f054-4d1b-426f-bc41-d0f69209e34e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:43.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2001" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4185,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:43.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 10 22:07:43.745: INFO: Waiting up to 5m0s for pod "pod-875cc0e9-57a4-4ec6-b5f4-aa36d240144a" in namespace "emptydir-1240" to be "success or failure" Mar 10 22:07:43.791: INFO: Pod "pod-875cc0e9-57a4-4ec6-b5f4-aa36d240144a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.220844ms Mar 10 22:07:45.795: INFO: Pod "pod-875cc0e9-57a4-4ec6-b5f4-aa36d240144a": Phase="Running", Reason="", readiness=true. Elapsed: 2.050118685s Mar 10 22:07:47.799: INFO: Pod "pod-875cc0e9-57a4-4ec6-b5f4-aa36d240144a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053988317s STEP: Saw pod success Mar 10 22:07:47.799: INFO: Pod "pod-875cc0e9-57a4-4ec6-b5f4-aa36d240144a" satisfied condition "success or failure" Mar 10 22:07:47.802: INFO: Trying to get logs from node jerma-worker2 pod pod-875cc0e9-57a4-4ec6-b5f4-aa36d240144a container test-container: STEP: delete the pod Mar 10 22:07:47.823: INFO: Waiting for pod pod-875cc0e9-57a4-4ec6-b5f4-aa36d240144a to disappear Mar 10 22:07:47.827: INFO: Pod pod-875cc0e9-57a4-4ec6-b5f4-aa36d240144a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:47.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1240" for this suite. 
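------------------------------
For context on the emptydir cases above: what varies between them is the volume's medium (tmpfs vs. node disk), the requested file mode, and the user the container runs as. Below is a minimal client-go sketch of a tmpfs-backed emptyDir mounted by a non-root container; the pod name, image, UID, and shell check are illustrative assumptions (the conformance test itself drives the agnhost mounttest binary rather than a shell).

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootTmpfsPod creates a file on a memory-backed emptyDir and prints its
// mode, mirroring what the (non-root,0777,tmpfs) case verifies.
func nonRootTmpfsPod() *corev1.Pod {
    uid := int64(1001) // any non-root UID; assumed for illustration
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-example"}, // hypothetical
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:            "test-container",
                Image:           "docker.io/library/busybox:1.29", // illustrative
                Command:         []string{"sh", "-c", "touch /test-volume/f && stat -c %a /test-volume/f"},
                SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the volume with tmpfs on the node.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
        },
    }
}
------------------------------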
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4197,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:47.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 10 22:07:47.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 10 22:07:47.977: INFO: stderr: "" Mar 10 22:07:47.977: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:07:47.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5087" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":252,"skipped":4199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:07:47.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0310 22:08:28.109437 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 10 22:08:28.109: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:08:28.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2856" for this suite. • [SLOW TEST:40.132 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":253,"skipped":4225,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:08:28.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 10 22:08:30.706: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0462e851-6d8a-41b7-b52c-5ab91a02b274" Mar 10 22:08:30.706: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0462e851-6d8a-41b7-b52c-5ab91a02b274" in namespace "pods-9138" to be "terminated due to deadline exceeded" Mar 10 22:08:30.731: INFO: Pod "pod-update-activedeadlineseconds-0462e851-6d8a-41b7-b52c-5ab91a02b274": Phase="Running", Reason="", readiness=true. Elapsed: 24.183183ms Mar 10 22:08:32.734: INFO: Pod "pod-update-activedeadlineseconds-0462e851-6d8a-41b7-b52c-5ab91a02b274": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.02768611s Mar 10 22:08:34.738: INFO: Pod "pod-update-activedeadlineseconds-0462e851-6d8a-41b7-b52c-5ab91a02b274": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.031336042s Mar 10 22:08:34.738: INFO: Pod "pod-update-activedeadlineseconds-0462e851-6d8a-41b7-b52c-5ab91a02b274" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:08:34.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9138" for this suite. • [SLOW TEST:6.626 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4243,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:08:34.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-083ff407-d9c2-40cc-82c2-a6ad90393d5c STEP: Creating a pod to test consume configMaps Mar 10 22:08:34.806: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea5d5196-015f-44bd-8bfb-8b9d54cc2ac0" in namespace "projected-2358" to be "success or failure" Mar 10 22:08:34.852: INFO: Pod "pod-projected-configmaps-ea5d5196-015f-44bd-8bfb-8b9d54cc2ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 46.012688ms Mar 10 22:08:36.855: INFO: Pod "pod-projected-configmaps-ea5d5196-015f-44bd-8bfb-8b9d54cc2ac0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.049140206s STEP: Saw pod success Mar 10 22:08:36.855: INFO: Pod "pod-projected-configmaps-ea5d5196-015f-44bd-8bfb-8b9d54cc2ac0" satisfied condition "success or failure" Mar 10 22:08:36.858: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-ea5d5196-015f-44bd-8bfb-8b9d54cc2ac0 container projected-configmap-volume-test: STEP: delete the pod Mar 10 22:08:36.878: INFO: Waiting for pod pod-projected-configmaps-ea5d5196-015f-44bd-8bfb-8b9d54cc2ac0 to disappear Mar 10 22:08:36.930: INFO: Pod pod-projected-configmaps-ea5d5196-015f-44bd-8bfb-8b9d54cc2ac0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:08:36.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2358" for this suite. 
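------------------------------
For context on the "with mappings" case above: unlike plain configMap consumption, each key is remapped to a chosen file path inside the mount. A minimal client-go sketch of the projected volume follows; the key and target path are illustrative assumptions, while the configMap name is taken from the log above.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume remaps key "data-1" of the configMap created above
// to path/to/data-2, so the container reads the value at
// <mountPath>/path/to/data-2 instead of <mountPath>/data-1.
func projectedConfigMapVolume() corev1.Volume {
    return corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "projected-configmap-test-volume-map-083ff407-d9c2-40cc-82c2-a6ad90393d5c",
                        },
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                }},
            },
        },
    }
}
------------------------------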
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4261,"failed":0} S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:08:36.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 10 22:08:39.564: INFO: Successfully updated pod "annotationupdate4b440695-75cd-4850-b629-e13b4410bef5" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:08:41.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1291" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4262,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:08:41.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3809 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 10 22:08:41.660: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 10 22:09:04.424: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.75:8080/dial?request=hostname&protocol=http&host=10.244.2.102&port=8080&tries=1'] Namespace:pod-network-test-3809 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 22:09:04.424: INFO: >>> kubeConfig: /root/.kube/config I0310 22:09:04.457834 6 log.go:172] (0xc0027840b0) (0xc00221e960) Create stream I0310 22:09:04.457870 6 log.go:172] (0xc0027840b0) (0xc00221e960) Stream added, broadcasting: 1 I0310 22:09:04.460674 6 log.go:172] (0xc0027840b0) Reply frame received for 1 I0310 22:09:04.460717 6 log.go:172] 
(0xc0027840b0) (0xc00160cf00) Create stream I0310 22:09:04.460728 6 log.go:172] (0xc0027840b0) (0xc00160cf00) Stream added, broadcasting: 3 I0310 22:09:04.461645 6 log.go:172] (0xc0027840b0) Reply frame received for 3 I0310 22:09:04.461685 6 log.go:172] (0xc0027840b0) (0xc0009be280) Create stream I0310 22:09:04.461700 6 log.go:172] (0xc0027840b0) (0xc0009be280) Stream added, broadcasting: 5 I0310 22:09:04.462697 6 log.go:172] (0xc0027840b0) Reply frame received for 5 I0310 22:09:04.533096 6 log.go:172] (0xc0027840b0) Data frame received for 3 I0310 22:09:04.533130 6 log.go:172] (0xc00160cf00) (3) Data frame handling I0310 22:09:04.533144 6 log.go:172] (0xc00160cf00) (3) Data frame sent I0310 22:09:04.533312 6 log.go:172] (0xc0027840b0) Data frame received for 5 I0310 22:09:04.533336 6 log.go:172] (0xc0009be280) (5) Data frame handling I0310 22:09:04.533365 6 log.go:172] (0xc0027840b0) Data frame received for 3 I0310 22:09:04.533374 6 log.go:172] (0xc00160cf00) (3) Data frame handling I0310 22:09:04.535042 6 log.go:172] (0xc0027840b0) Data frame received for 1 I0310 22:09:04.535063 6 log.go:172] (0xc00221e960) (1) Data frame handling I0310 22:09:04.535080 6 log.go:172] (0xc00221e960) (1) Data frame sent I0310 22:09:04.535096 6 log.go:172] (0xc0027840b0) (0xc00221e960) Stream removed, broadcasting: 1 I0310 22:09:04.535114 6 log.go:172] (0xc0027840b0) Go away received I0310 22:09:04.535226 6 log.go:172] (0xc0027840b0) (0xc00221e960) Stream removed, broadcasting: 1 I0310 22:09:04.535244 6 log.go:172] (0xc0027840b0) (0xc00160cf00) Stream removed, broadcasting: 3 I0310 22:09:04.535258 6 log.go:172] (0xc0027840b0) (0xc0009be280) Stream removed, broadcasting: 5 Mar 10 22:09:04.535: INFO: Waiting for responses: map[] Mar 10 22:09:04.538: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.75:8080/dial?request=hostname&protocol=http&host=10.244.1.74&port=8080&tries=1'] Namespace:pod-network-test-3809 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 10 22:09:04.538: INFO: >>> kubeConfig: /root/.kube/config I0310 22:09:04.565396 6 log.go:172] (0xc0022f1550) (0xc0002d5e00) Create stream I0310 22:09:04.565414 6 log.go:172] (0xc0022f1550) (0xc0002d5e00) Stream added, broadcasting: 1 I0310 22:09:04.568362 6 log.go:172] (0xc0022f1550) Reply frame received for 1 I0310 22:09:04.568399 6 log.go:172] (0xc0022f1550) (0xc0002d5ea0) Create stream I0310 22:09:04.568421 6 log.go:172] (0xc0022f1550) (0xc0002d5ea0) Stream added, broadcasting: 3 I0310 22:09:04.570690 6 log.go:172] (0xc0022f1550) Reply frame received for 3 I0310 22:09:04.570747 6 log.go:172] (0xc0022f1550) (0xc00160d040) Create stream I0310 22:09:04.570763 6 log.go:172] (0xc0022f1550) (0xc00160d040) Stream added, broadcasting: 5 I0310 22:09:04.572796 6 log.go:172] (0xc0022f1550) Reply frame received for 5 I0310 22:09:04.642575 6 log.go:172] (0xc0022f1550) Data frame received for 3 I0310 22:09:04.642597 6 log.go:172] (0xc0002d5ea0) (3) Data frame handling I0310 22:09:04.642613 6 log.go:172] (0xc0002d5ea0) (3) Data frame sent I0310 22:09:04.643360 6 log.go:172] (0xc0022f1550) Data frame received for 5 I0310 22:09:04.643398 6 log.go:172] (0xc00160d040) (5) Data frame handling I0310 22:09:04.643455 6 log.go:172] (0xc0022f1550) Data frame received for 3 I0310 22:09:04.643485 6 log.go:172] (0xc0002d5ea0) (3) Data frame handling I0310 22:09:04.644754 6 log.go:172] (0xc0022f1550) Data frame received for 1 I0310 22:09:04.644774 6 log.go:172] (0xc0002d5e00) (1) 
Data frame handling I0310 22:09:04.644784 6 log.go:172] (0xc0002d5e00) (1) Data frame sent I0310 22:09:04.644797 6 log.go:172] (0xc0022f1550) (0xc0002d5e00) Stream removed, broadcasting: 1 I0310 22:09:04.644813 6 log.go:172] (0xc0022f1550) Go away received I0310 22:09:04.644962 6 log.go:172] (0xc0022f1550) (0xc0002d5e00) Stream removed, broadcasting: 1 I0310 22:09:04.644985 6 log.go:172] (0xc0022f1550) (0xc0002d5ea0) Stream removed, broadcasting: 3 I0310 22:09:04.644996 6 log.go:172] (0xc0022f1550) (0xc00160d040) Stream removed, broadcasting: 5 Mar 10 22:09:04.645: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:09:04.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3809" for this suite. • [SLOW TEST:23.042 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4282,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:09:04.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:09:04.723: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 10 22:09:04.739: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 10 22:09:09.743: INFO: Pod name sample-pod: Found 1 pod out of 1 STEP: ensuring each pod is running Mar 10 22:09:09.744: INFO: Creating deployment "test-rolling-update-deployment" Mar 10 22:09:09.749: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 10 22:09:09.761: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 10 22:09:11.769: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected Mar 10 22:09:11.771: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 10 22:09:11.780: INFO: 
Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5844 /apis/apps/v1/namespaces/deployment-5844/deployments/test-rolling-update-deployment a01c0f85-54f2-440f-bbe6-17620899365d 691136 1 2020-03-10 22:09:09 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00293f608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-10 22:09:09 +0000 UTC,LastTransitionTime:2020-03-10 22:09:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-10 22:09:11 +0000 UTC,LastTransitionTime:2020-03-10 22:09:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 10 22:09:11.783: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-5844 /apis/apps/v1/namespaces/deployment-5844/replicasets/test-rolling-update-deployment-67cf4f6444 b62959be-309b-4a62-b8c7-46f6253a92d3 691125 1 2020-03-10 22:09:09 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment a01c0f85-54f2-440f-bbe6-17620899365d 0xc001d4f657 0xc001d4f658}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d4f6c8 
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 10 22:09:11.783: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 10 22:09:11.783: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5844 /apis/apps/v1/namespaces/deployment-5844/replicasets/test-rolling-update-controller dc7ad4db-202f-4f16-ad25-388ef44eea1d 691135 2 2020-03-10 22:09:04 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment a01c0f85-54f2-440f-bbe6-17620899365d 0xc001d4f587 0xc001d4f588}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001d4f5e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 10 22:09:11.786: INFO: Pod "test-rolling-update-deployment-67cf4f6444-8jrrs" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-8jrrs test-rolling-update-deployment-67cf4f6444- deployment-5844 /api/v1/namespaces/deployment-5844/pods/test-rolling-update-deployment-67cf4f6444-8jrrs 2fbb67a2-bb70-4a1d-bdc3-86acf9c41f4a 691124 0 2020-03-10 22:09:09 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 b62959be-309b-4a62-b8c7-46f6253a92d3 0xc0028063a7 0xc0028063a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l9hk2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l9hk2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l9hk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:09:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:09:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.76,StartTime:2020-03-10 22:09:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 22:09:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://04755bfbafaf6d889a745c546a2225e7744b22137c1c6668041f7957f11393b3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.76,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:09:11.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5844" for this suite. • [SLOW TEST:7.139 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":258,"skipped":4299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:09:11.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-0b3eff08-d196-45e0-ab8e-628073f605dc STEP: Creating a pod to test consume secrets Mar 10 22:09:11.973: INFO: Waiting up to 5m0s for pod "pod-secrets-f560a31c-e625-424f-a921-81c51cce7962" in namespace "secrets-852" to be "success or failure" Mar 10 22:09:11.985: INFO: Pod "pod-secrets-f560a31c-e625-424f-a921-81c51cce7962": Phase="Pending", Reason="", readiness=false. Elapsed: 11.654846ms Mar 10 22:09:13.989: INFO: Pod "pod-secrets-f560a31c-e625-424f-a921-81c51cce7962": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.015513806s STEP: Saw pod success Mar 10 22:09:13.989: INFO: Pod "pod-secrets-f560a31c-e625-424f-a921-81c51cce7962" satisfied condition "success or failure" Mar 10 22:09:13.992: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-f560a31c-e625-424f-a921-81c51cce7962 container secret-volume-test: STEP: delete the pod Mar 10 22:09:14.011: INFO: Waiting for pod pod-secrets-f560a31c-e625-424f-a921-81c51cce7962 to disappear Mar 10 22:09:14.015: INFO: Pod pod-secrets-f560a31c-e625-424f-a921-81c51cce7962 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:09:14.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-852" for this suite. STEP: Destroying namespace "secret-namespace-7083" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4324,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:09:14.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 10 22:09:14.104: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:09:29.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2877" for this suite. 
• [SLOW TEST:15.048 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":260,"skipped":4325,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:09:29.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 10 22:09:29.516: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 10 22:09:31.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719474969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719474969, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719474969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719474969, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 10 22:09:34.557: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:09:34.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:09:35.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7057" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.975 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":261,"skipped":4335,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:09:36.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-6dc050d9-5257-48b8-8667-929c754ad160 STEP: Creating a pod to test consume configMaps Mar 10 22:09:36.142: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af6dff4d-6311-4c06-90d3-8a3dabf089bf" in namespace "projected-8851" to be "success or failure" Mar 10 22:09:36.153: INFO: Pod "pod-projected-configmaps-af6dff4d-6311-4c06-90d3-8a3dabf089bf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.358674ms Mar 10 22:09:38.170: INFO: Pod "pod-projected-configmaps-af6dff4d-6311-4c06-90d3-8a3dabf089bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027889391s Mar 10 22:09:40.173: INFO: Pod "pod-projected-configmaps-af6dff4d-6311-4c06-90d3-8a3dabf089bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031314275s STEP: Saw pod success Mar 10 22:09:40.173: INFO: Pod "pod-projected-configmaps-af6dff4d-6311-4c06-90d3-8a3dabf089bf" satisfied condition "success or failure" Mar 10 22:09:40.176: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-af6dff4d-6311-4c06-90d3-8a3dabf089bf container projected-configmap-volume-test: STEP: delete the pod Mar 10 22:09:40.202: INFO: Waiting for pod pod-projected-configmaps-af6dff4d-6311-4c06-90d3-8a3dabf089bf to disappear Mar 10 22:09:40.212: INFO: Pod pod-projected-configmaps-af6dff4d-6311-4c06-90d3-8a3dabf089bf no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:09:40.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8851" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:09:40.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 10 22:09:40.280: INFO: >>> kubeConfig: /root/.kube/config Mar 10 22:09:42.210: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:09:53.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1310" for this suite. 
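Here the suite registers two CRDs that share a group and version but declare different kinds, then checks that both surface in the OpenAPI document. A rough sketch with hypothetical names:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  names: {kind: Foo, plural: foos}
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema: {openAPIV3Schema: {type: object}}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.example.com
spec:
  group: example.com
  names: {kind: Bar, plural: bars}
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema: {openAPIV3Schema: {type: object}}

Both kinds should then appear under the same example.com/v1 group-version in the published spec.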
• [SLOW TEST:13.157 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":263,"skipped":4379,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:09:53.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:09:53.455: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 10 22:09:56.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3183 create -f -' Mar 10 22:09:58.281: INFO: stderr: "" Mar 10 22:09:58.281: INFO: stdout: "e2e-test-crd-publish-openapi-2553-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 10 22:09:58.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3183 delete e2e-test-crd-publish-openapi-2553-crds test-cr' Mar 10 22:09:58.392: INFO: stderr: "" Mar 10 22:09:58.392: INFO: stdout: "e2e-test-crd-publish-openapi-2553-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 10 22:09:58.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3183 apply -f -' Mar 10 22:09:58.691: INFO: stderr: "" Mar 10 22:09:58.691: INFO: stdout: "e2e-test-crd-publish-openapi-2553-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 10 22:09:58.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3183 delete e2e-test-crd-publish-openapi-2553-crds test-cr' Mar 10 22:09:58.771: INFO: stderr: "" Mar 10 22:09:58.771: INFO: stdout: "e2e-test-crd-publish-openapi-2553-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 10 22:09:58.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2553-crds' Mar 10 22:09:59.002: INFO: stderr: "" Mar 10 22:09:59.002: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2553-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n 
apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:10:01.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3183" for this suite. • [SLOW TEST:8.403 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":264,"skipped":4387,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:10:01.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:10:01.846: INFO: Create a RollingUpdate DaemonSet Mar 10 22:10:01.848: INFO: Check that daemon pods launch on every node of the cluster Mar 10 22:10:01.853: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:10:01.858: INFO: Number of nodes with available pods: 0 Mar 10 22:10:01.858: INFO: Node jerma-worker is running more than one daemon pod Mar 10 22:10:02.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:10:02.866: INFO: Number of nodes with available pods: 0 Mar 10 22:10:02.866: INFO: Node jerma-worker is running more than one daemon pod Mar 10 
22:10:03.862: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:10:03.866: INFO: Number of nodes with available pods: 1 Mar 10 22:10:03.866: INFO: Node jerma-worker is running more than one daemon pod Mar 10 22:10:04.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:10:04.866: INFO: Number of nodes with available pods: 2 Mar 10 22:10:04.866: INFO: Number of running nodes: 2, number of available pods: 2 Mar 10 22:10:04.866: INFO: Update the DaemonSet to trigger a rollout Mar 10 22:10:04.872: INFO: Updating DaemonSet daemon-set Mar 10 22:10:16.893: INFO: Roll back the DaemonSet before rollout is complete Mar 10 22:10:16.899: INFO: Updating DaemonSet daemon-set Mar 10 22:10:16.899: INFO: Make sure DaemonSet rollback is complete Mar 10 22:10:16.924: INFO: Wrong image for pod: daemon-set-vqcmk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 10 22:10:16.924: INFO: Pod daemon-set-vqcmk is not available Mar 10 22:10:16.934: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:10:17.939: INFO: Wrong image for pod: daemon-set-vqcmk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 10 22:10:17.939: INFO: Pod daemon-set-vqcmk is not available Mar 10 22:10:17.943: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:10:18.938: INFO: Pod daemon-set-s8s9v is not available Mar 10 22:10:18.942: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3961, will wait for the garbage collector to delete the pods Mar 10 22:10:19.011: INFO: Deleting DaemonSet.extensions daemon-set took: 11.802521ms Mar 10 22:10:19.311: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.201528ms Mar 10 22:10:26.114: INFO: Number of nodes with available pods: 0 Mar 10 22:10:26.114: INFO: Number of running nodes: 0, number of available pods: 0 Mar 10 22:10:26.117: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3961/daemonsets","resourceVersion":"691646"},"items":null} Mar 10 22:10:26.120: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3961/pods","resourceVersion":"691646"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:10:26.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3961" for this suite. 
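The rollback scenario above can be reproduced by hand with roughly the following manifest (label and container names are illustrative; the images are the ones the log shows): a RollingUpdate DaemonSet whose image is swapped to something unpullable and then rolled back before the rollout completes. Only pods that already picked up the bad image should be restarted by the rollback.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {app: daemon-set}
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels: {app: daemon-set}
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
# To reproduce the rollback by hand:
#   kubectl set image daemonset/daemon-set app=foo:non-existent
#   kubectl rollout undo daemonset/daemon-set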
• [SLOW TEST:24.388 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":265,"skipped":4392,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:10:26.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-c004a304-792d-4e7e-b6a0-768c3fb60dbd STEP: Creating a pod to test consume secrets Mar 10 22:10:26.244: INFO: Waiting up to 5m0s for pod "pod-secrets-c93efd12-1f16-4448-b1c7-48874e9866c1" in namespace "secrets-1412" to be "success or failure" Mar 10 22:10:26.249: INFO: Pod "pod-secrets-c93efd12-1f16-4448-b1c7-48874e9866c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289221ms Mar 10 22:10:28.252: INFO: Pod "pod-secrets-c93efd12-1f16-4448-b1c7-48874e9866c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007689691s Mar 10 22:10:30.257: INFO: Pod "pod-secrets-c93efd12-1f16-4448-b1c7-48874e9866c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012777235s STEP: Saw pod success Mar 10 22:10:30.257: INFO: Pod "pod-secrets-c93efd12-1f16-4448-b1c7-48874e9866c1" satisfied condition "success or failure" Mar 10 22:10:30.261: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c93efd12-1f16-4448-b1c7-48874e9866c1 container secret-volume-test: STEP: delete the pod Mar 10 22:10:30.288: INFO: Waiting for pod pod-secrets-c93efd12-1f16-4448-b1c7-48874e9866c1 to disappear Mar 10 22:10:30.312: INFO: Pod pod-secrets-c93efd12-1f16-4448-b1c7-48874e9866c1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:10:30.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1412" for this suite. 
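The defaultMode test mounts a secret volume and asserts that the projected files carry the requested permission bits. A minimal reproduction, assuming a busybox image and simplified names in place of the generated ones:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400   # every projected file appears as -r--------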
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4397,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:10:30.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-3d3eecb0-02fd-4f07-b8f2-725d65c1e5d0 STEP: Creating a pod to test consume configMaps Mar 10 22:10:30.425: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d0df803e-3e31-4f26-8ccc-222c21f18176" in namespace "projected-9003" to be "success or failure" Mar 10 22:10:30.430: INFO: Pod "pod-projected-configmaps-d0df803e-3e31-4f26-8ccc-222c21f18176": Phase="Pending", Reason="", readiness=false. Elapsed: 4.607997ms Mar 10 22:10:32.433: INFO: Pod "pod-projected-configmaps-d0df803e-3e31-4f26-8ccc-222c21f18176": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007941899s STEP: Saw pod success Mar 10 22:10:32.433: INFO: Pod "pod-projected-configmaps-d0df803e-3e31-4f26-8ccc-222c21f18176" satisfied condition "success or failure" Mar 10 22:10:32.436: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-d0df803e-3e31-4f26-8ccc-222c21f18176 container projected-configmap-volume-test: STEP: delete the pod Mar 10 22:10:32.489: INFO: Waiting for pod pod-projected-configmaps-d0df803e-3e31-4f26-8ccc-222c21f18176 to disappear Mar 10 22:10:32.497: INFO: Pod pod-projected-configmaps-d0df803e-3e31-4f26-8ccc-222c21f18176 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:10:32.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9003" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4406,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:10:32.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 10 22:10:32.614: INFO: Waiting up to 5m0s for pod "client-containers-98202c04-1fb9-4935-9a49-ef5b68ce1e25" in namespace "containers-8321" to be "success or failure" Mar 10 22:10:32.616: INFO: Pod "client-containers-98202c04-1fb9-4935-9a49-ef5b68ce1e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.419449ms Mar 10 22:10:34.619: INFO: Pod "client-containers-98202c04-1fb9-4935-9a49-ef5b68ce1e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005123107s STEP: Saw pod success Mar 10 22:10:34.619: INFO: Pod "client-containers-98202c04-1fb9-4935-9a49-ef5b68ce1e25" satisfied condition "success or failure" Mar 10 22:10:34.621: INFO: Trying to get logs from node jerma-worker2 pod client-containers-98202c04-1fb9-4935-9a49-ef5b68ce1e25 container test-container: STEP: delete the pod Mar 10 22:10:34.641: INFO: Waiting for pod client-containers-98202c04-1fb9-4935-9a49-ef5b68ce1e25 to disappear Mar 10 22:10:34.658: INFO: Pod client-containers-98202c04-1fb9-4935-9a49-ef5b68ce1e25 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:10:34.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8321" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4426,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:10:34.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 10 22:10:37.797: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:10:38.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5897" for this suite. •{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":269,"skipped":4427,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:10:38.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-9997/secret-test-78581f44-d646-47a1-bdaf-35cbcfc82ba2 STEP: Creating a pod to test consume secrets Mar 10 22:10:38.978: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f2d6646-9d90-47d4-ac45-0f9a6ed08476" in namespace "secrets-9997" to be "success or failure" Mar 10 22:10:39.030: INFO: Pod "pod-configmaps-7f2d6646-9d90-47d4-ac45-0f9a6ed08476": Phase="Pending", Reason="", readiness=false. Elapsed: 51.909022ms Mar 10 22:10:41.034: INFO: Pod "pod-configmaps-7f2d6646-9d90-47d4-ac45-0f9a6ed08476": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.056064919s STEP: Saw pod success Mar 10 22:10:41.034: INFO: Pod "pod-configmaps-7f2d6646-9d90-47d4-ac45-0f9a6ed08476" satisfied condition "success or failure" Mar 10 22:10:41.037: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-7f2d6646-9d90-47d4-ac45-0f9a6ed08476 container env-test: STEP: delete the pod Mar 10 22:10:41.083: INFO: Waiting for pod pod-configmaps-7f2d6646-9d90-47d4-ac45-0f9a6ed08476 to disappear Mar 10 22:10:41.088: INFO: Pod pod-configmaps-7f2d6646-9d90-47d4-ac45-0f9a6ed08476 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:10:41.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9997" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4429,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:10:41.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:10:57.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8747" for this suite. • [SLOW TEST:16.236 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":271,"skipped":4432,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:10:57.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 10 22:10:57.401: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:11:12.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2550" for this suite. • [SLOW TEST:14.664 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":272,"skipped":4452,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:11:12.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-486f43fb-3cdb-4afd-b497-ad0a8d6bffb0 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:11:12.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-587" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":273,"skipped":4459,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:11:12.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-db03d09a-4330-46dd-be03-1b45ef127063 STEP: Creating a pod to test consume secrets Mar 10 22:11:12.174: INFO: Waiting up to 5m0s for pod "pod-secrets-02fbcade-d5b7-430d-bba2-c4e37f846a84" in namespace "secrets-1013" to be "success or failure" Mar 10 22:11:12.186: INFO: Pod "pod-secrets-02fbcade-d5b7-430d-bba2-c4e37f846a84": Phase="Pending", Reason="", readiness=false. Elapsed: 12.17972ms Mar 10 22:11:14.195: INFO: Pod "pod-secrets-02fbcade-d5b7-430d-bba2-c4e37f846a84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020796478s STEP: Saw pod success Mar 10 22:11:14.195: INFO: Pod "pod-secrets-02fbcade-d5b7-430d-bba2-c4e37f846a84" satisfied condition "success or failure" Mar 10 22:11:14.197: INFO: Trying to get logs from node jerma-worker pod pod-secrets-02fbcade-d5b7-430d-bba2-c4e37f846a84 container secret-volume-test: STEP: delete the pod Mar 10 22:11:14.230: INFO: Waiting for pod pod-secrets-02fbcade-d5b7-430d-bba2-c4e37f846a84 to disappear Mar 10 22:11:14.234: INFO: Pod pod-secrets-02fbcade-d5b7-430d-bba2-c4e37f846a84 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:11:14.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1013" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:11:14.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:11:14.346: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 10 22:11:14.354: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:14.376: INFO: Number of nodes with available pods: 0 Mar 10 22:11:14.376: INFO: Node jerma-worker is running more than one daemon pod Mar 10 22:11:15.388: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:15.390: INFO: Number of nodes with available pods: 0 Mar 10 22:11:15.390: INFO: Node jerma-worker is running more than one daemon pod Mar 10 22:11:16.379: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:16.381: INFO: Number of nodes with available pods: 1 Mar 10 22:11:16.381: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 22:11:17.380: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:17.383: INFO: Number of nodes with available pods: 2 Mar 10 22:11:17.383: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 10 22:11:17.445: INFO: Wrong image for pod: daemon-set-f8xvh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 10 22:11:17.445: INFO: Wrong image for pod: daemon-set-gsmw7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 10 22:11:17.451: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:18.455: INFO: Wrong image for pod: daemon-set-f8xvh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 10 22:11:18.455: INFO: Wrong image for pod: daemon-set-gsmw7. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 10 22:11:18.458: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:19.463: INFO: Wrong image for pod: daemon-set-f8xvh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 10 22:11:19.463: INFO: Pod daemon-set-f8xvh is not available Mar 10 22:11:19.463: INFO: Wrong image for pod: daemon-set-gsmw7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 10 22:11:19.466: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:20.483: INFO: Wrong image for pod: daemon-set-gsmw7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 10 22:11:20.483: INFO: Pod daemon-set-mck4d is not available Mar 10 22:11:20.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:21.455: INFO: Wrong image for pod: daemon-set-gsmw7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 10 22:11:21.455: INFO: Pod daemon-set-mck4d is not available Mar 10 22:11:21.459: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:22.467: INFO: Wrong image for pod: daemon-set-gsmw7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 10 22:11:22.467: INFO: Pod daemon-set-mck4d is not available Mar 10 22:11:22.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:23.455: INFO: Wrong image for pod: daemon-set-gsmw7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 10 22:11:23.455: INFO: Pod daemon-set-gsmw7 is not available Mar 10 22:11:23.457: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:24.455: INFO: Pod daemon-set-l9mpd is not available Mar 10 22:11:24.458: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 10 22:11:24.460: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:24.462: INFO: Number of nodes with available pods: 1 Mar 10 22:11:24.462: INFO: Node jerma-worker is running more than one daemon pod Mar 10 22:11:25.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:25.469: INFO: Number of nodes with available pods: 1 Mar 10 22:11:25.469: INFO: Node jerma-worker is running more than one daemon pod Mar 10 22:11:26.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 22:11:26.470: INFO: Number of nodes with available pods: 2 Mar 10 22:11:26.470: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5748, will wait for the garbage collector to delete the pods Mar 10 22:11:26.542: INFO: Deleting DaemonSet.extensions daemon-set took: 5.689723ms Mar 10 22:11:26.842: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.225061ms Mar 10 22:11:30.248: INFO: Number of nodes with available pods: 0 Mar 10 22:11:30.248: INFO: Number of running nodes: 0, number of available pods: 0 Mar 10 22:11:30.251: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5748/daemonsets","resourceVersion":"692203"},"items":null} Mar 10 22:11:30.252: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5748/pods","resourceVersion":"692203"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:11:30.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5748" for this suite. 
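The update half of the scenario above hinges on the RollingUpdate strategy: changing the pod template (here the image, which the test moves from httpd to agnhost) replaces daemon pods node by node, never taking down more than maxUnavailable at a time. Sketched with hypothetical label and container names:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {app: daemon-set}
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one node's daemon pod is down during the update
  template:
    metadata:
      labels: {app: daemon-set}
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # updated from httpd:2.4.38-alpine
# kubectl rollout status daemonset/daemon-set   # watch the rolling replacement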
• [SLOW TEST:16.028 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":275,"skipped":4499,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:11:30.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-afd52a2f-75c2-44e1-bd8b-61ebcd00c76f STEP: Creating a pod to test consume configMaps Mar 10 22:11:30.429: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec472291-82a6-43f8-beae-884af9f6e002" in namespace "projected-9456" to be "success or failure" Mar 10 22:11:30.431: INFO: Pod "pod-projected-configmaps-ec472291-82a6-43f8-beae-884af9f6e002": Phase="Pending", Reason="", readiness=false. Elapsed: 1.968659ms Mar 10 22:11:32.450: INFO: Pod "pod-projected-configmaps-ec472291-82a6-43f8-beae-884af9f6e002": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021238708s STEP: Saw pod success Mar 10 22:11:32.450: INFO: Pod "pod-projected-configmaps-ec472291-82a6-43f8-beae-884af9f6e002" satisfied condition "success or failure" Mar 10 22:11:32.452: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-ec472291-82a6-43f8-beae-884af9f6e002 container projected-configmap-volume-test: STEP: delete the pod Mar 10 22:11:32.470: INFO: Waiting for pod pod-projected-configmaps-ec472291-82a6-43f8-beae-884af9f6e002 to disappear Mar 10 22:11:32.474: INFO: Pod pod-projected-configmaps-ec472291-82a6-43f8-beae-884af9f6e002 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:11:32.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9456" for this suite. 
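This test is the projected-volume analogue of the mapped secret earlier: a configMap source inside a projected volume, with a per-item path and mode. A rough sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-mapped-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-2
            path: path/to/data-2   # remapped path inside the mount
            mode: 0400             # per-item mode for just this file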
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4501,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:11:32.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 22:11:33.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4752" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":277,"skipped":4522,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 22:11:33.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 22:11:33.518: INFO: Creating deployment "webserver-deployment" Mar 10 22:11:33.528: INFO: Waiting for observed generation 1 Mar 10 22:11:35.563: INFO: Waiting for all required pods to come up Mar 10 22:11:35.588: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 10 22:11:39.615: INFO: Waiting for deployment "webserver-deployment" to complete Mar 10 22:11:39.620: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 10 22:11:39.628: INFO: Updating deployment webserver-deployment Mar 10 22:11:39.628: INFO: Waiting for observed generation 2 Mar 10 22:11:41.692: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 10 22:11:41.708: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 10 22:11:41.720: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 10 22:11:41.727: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 10 22:11:41.727: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 10 22:11:41.730: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 10 22:11:41.739: INFO: Verifying 
that deployment "webserver-deployment" has minimum required number of available replicas Mar 10 22:11:41.739: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 10 22:11:41.745: INFO: Updating deployment webserver-deployment Mar 10 22:11:41.745: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 10 22:11:41.768: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 10 22:11:41.774: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 10 22:11:41.849: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-372 /apis/apps/v1/namespaces/deployment-372/deployments/webserver-deployment 3ab14e73-2672-4e29-81a8-f4e63c7e8add 692506 3 2020-03-10 22:11:33 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002736508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-10 22:11:40 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-10 22:11:41 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 10 22:11:41.911: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-372 /apis/apps/v1/namespaces/deployment-372/replicasets/webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 692491 3 2020-03-10 22:11:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 3ab14e73-2672-4e29-81a8-f4e63c7e8add 0xc002736b07 0xc002736b08}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002736b98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 10 22:11:41.911: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 10 22:11:41.911: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-372 /apis/apps/v1/namespaces/deployment-372/replicasets/webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 692532 3 2020-03-10 22:11:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 3ab14e73-2672-4e29-81a8-f4e63c7e8add 0xc0027369d7 0xc0027369d8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002736a58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 10 22:11:42.012: INFO: Pod "webserver-deployment-595b5b9587-4rgcj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4rgcj webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-4rgcj e244098c-f68b-4393-bb8f-cbea0bb4dbbc 692543 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc002737197 0xc002737198}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.013: INFO: Pod "webserver-deployment-595b5b9587-4tmzw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4tmzw webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-4tmzw 7d07902c-97c0-426c-b03f-9a32621f1407 692510 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0027372b0 0xc0027372b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.013: INFO: Pod "webserver-deployment-595b5b9587-69cvf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-69cvf webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-69cvf e676f0dd-cd98-4dba-8ab9-d9e4cf88b9fc 692500 0 2020-03-10 
22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0027373c0 0xc0027373c1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.014: INFO: Pod "webserver-deployment-595b5b9587-6l264" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6l264 webserver-deployment-595b5b9587- deployment-372 
/api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-6l264 69669c3d-190d-46ab-89d0-960eb7810f81 692526 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0027374d0 0xc0027374d1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.014: INFO: Pod "webserver-deployment-595b5b9587-74tb8" is available: 
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-74tb8 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-74tb8 affc1bed-9904-4c6f-ae7c-e45985b9ff97 692390 0 2020-03-10 22:11:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0027375e0 0xc0027375e1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.91,StartTime:2020-03-10 22:11:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 22:11:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b74e051478f06833b3a50549df16b1c86880c69a79592658eba279e6d3858928,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.014: INFO: Pod "webserver-deployment-595b5b9587-9s22z" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9s22z webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-9s22z 9f0f3bff-6806-4a65-afc8-0f1067283ea6 692544 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc002737750 0xc002737751}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname
:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.015: INFO: Pod "webserver-deployment-595b5b9587-bflw6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bflw6 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-bflw6 c9cd4e17-babe-48a3-ab8b-bfa7de3a295a 692393 0 2020-03-10 22:11:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc002737860 0xc002737861}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysc
tls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.114,StartTime:2020-03-10 22:11:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 22:11:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7c2b8108e613077c5f41ad132364f9776f93a500bdd2b247e43338dc1bd6a5ef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.015: INFO: Pod "webserver-deployment-595b5b9587-bqhd9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bqhd9 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-bqhd9 e4325cc8-0147-40f4-a89f-58eba82fecc7 692379 0 2020-03-10 22:11:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc002737a10 0xc002737a11}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.117,StartTime:2020-03-10 22:11:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 22:11:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b6b673a914b57055fb1bb203d93f57091bef42df23f2b1b840a8fe3a8c40777e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.117,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.016: INFO: Pod "webserver-deployment-595b5b9587-d8jlr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d8jlr webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-d8jlr 54ebbaf4-32b8-4343-8b9b-acde98a6d88a 692389 0 2020-03-10 22:11:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc002737be0 0xc002737be1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effe
ct:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.116,StartTime:2020-03-10 22:11:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 22:11:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ff9138fb56b6fdd51818ed1f5727a6b26c07c8193b4d5c39eb3c7a1bb387cba2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.116,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.016: INFO: Pod "webserver-deployment-595b5b9587-dtfmf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dtfmf webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-dtfmf 9f47fc9d-42dd-4bd6-bd05-e67616b80425 692539 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc002737da0 0xc002737da1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.016: INFO: Pod "webserver-deployment-595b5b9587-jpmrc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jpmrc webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-jpmrc f16ebd69-197b-49aa-940d-fc6a1d234ea6 692385 0 2020-03-10 22:11:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc002737ee0 0xc002737ee1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.92,StartTime:2020-03-10 22:11:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 22:11:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bd5d2ebae3f9c3a37d7b5592e74838e515774cb68d3103429f8ba05467cd3902,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.017: INFO: Pod "webserver-deployment-595b5b9587-p7pmd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-p7pmd webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-p7pmd a44ee43f-4d42-4ae2-bdd5-47b35c96a64c 692396 0 2020-03-10 22:11:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0026ae0f0 0xc0026ae0f1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effec
t:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.113,StartTime:2020-03-10 22:11:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 22:11:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ab4dba7f5e0da29df8289ace71e818962cfa1f2ec0d93762296360bf46ed02ba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.113,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.017: INFO: Pod "webserver-deployment-595b5b9587-pzvl8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pzvl8 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-pzvl8 9a2f7749-16af-4b04-bda1-e9f79ddf79c8 692504 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0026ae290 0xc0026ae291}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.017: INFO: Pod "webserver-deployment-595b5b9587-q4792" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q4792 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-q4792 0e71bc62-fe98-45ba-9c14-ef21f36c43f4 692524 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0026ae3c0 0xc0026ae3c1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.018: INFO: Pod "webserver-deployment-595b5b9587-r4t9q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-r4t9q webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-r4t9q 85c1d5b1-c809-42b9-9d37-3a240573a139 692545 0 2020-03-10 
22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0026ae550 0xc0026ae551}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-10 22:11:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.018: INFO: Pod "webserver-deployment-595b5b9587-v87lf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v87lf webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-v87lf e2ec8156-3e6b-4946-92ed-d33881edb70c 692371 0 2020-03-10 22:11:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0026ae700 0xc0026ae701}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kuberne
tes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.93,StartTime:2020-03-10 22:11:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 22:11:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f208ec9a2cc1c7f2506c4279f5518cc59600b692b28544fb1a46bffaf962f9c9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.018: INFO: Pod "webserver-deployment-595b5b9587-vgmkk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vgmkk webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-vgmkk 3391f217-f28f-4fd9-8f11-7f891f18df98 692540 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0026ae8b0 0xc0026ae8b1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.019: INFO: Pod "webserver-deployment-595b5b9587-wvsk8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wvsk8 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-wvsk8 c6daaa58-b632-4ffa-9193-5ccf685a955f 692541 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0026ae9f0 0xc0026ae9f1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.019: INFO: Pod "webserver-deployment-595b5b9587-wwb99" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wwb99 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-wwb99 d9873826-e06a-4622-9796-0fa3c712392b 692384 0 2020-03-10 
22:11:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0026aeb30 0xc0026aeb31}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.115,StartTime:2020-03-10 22:11:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-10 22:11:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2518fc015ba424e95223b520fd6435c4d3c5d24a263f2915be94c43f1d5082f0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.019: INFO: Pod "webserver-deployment-595b5b9587-z4r9m" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z4r9m webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-z4r9m 4bd03e1a-c883-47ef-bccd-c9c5ca3b10da 692527 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 90115ada-323a-4ee3-9968-519a60edb3fa 0xc0026aecd0 0xc0026aecd1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.020: INFO: Pod "webserver-deployment-c7997dcc8-2b92k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2b92k webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-2b92k 82c7919a-7e13-48c6-9b20-20eec9da75a7 692468 0 2020-03-10 22:11:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026aede0 0xc0026aede1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{To
leration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-10 22:11:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.020: INFO: Pod "webserver-deployment-c7997dcc8-4wvdc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4wvdc webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-4wvdc 5789c0e2-b7c7-4461-840b-9979488cfe9d 692559 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026aef90 0xc0026aef91}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-10 22:11:41 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.020: INFO: Pod "webserver-deployment-c7997dcc8-5jwpp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5jwpp webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-5jwpp f98a706c-43c6-4f26-98c2-4aef7c827914 692529 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026af120 0xc0026af121}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:R
esourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.021: INFO: Pod "webserver-deployment-c7997dcc8-bw4cd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bw4cd webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-bw4cd 613c694c-461f-449e-acb2-d2254708259d 692443 0 2020-03-10 22:11:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026af250 0xc0026af251}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:
nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-10 22:11:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.021: INFO: Pod "webserver-deployment-c7997dcc8-d4cr7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d4cr7 webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-d4cr7 ddd86815-1691-4282-98b2-3beeb0b2a024 692515 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026af3d0 0xc0026af3d1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.021: INFO: Pod "webserver-deployment-c7997dcc8-f8x6c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f8x6c webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-f8x6c fe2e947d-35f6-45d9-814d-45803908cb86 692549 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026af4f0 0xc0026af4f1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 10 22:11:42.021: INFO: Pod "webserver-deployment-c7997dcc8-gkmd2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gkmd2 webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-gkmd2 ee4b31f3-c54f-4781-bc57-8f3a4e75e214 692467 0 2020-03-10 22:11:40 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026af650 0xc0026af651}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-10 22:11:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 22:11:42.022: INFO: Pod "webserver-deployment-c7997dcc8-hhvmn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hhvmn webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-hhvmn ba8057b9-e0b3-4d00-b3ec-1e38dcb1ce68 692447 0 2020-03-10 22:11:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026af850 0xc0026af851}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-10 22:11:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 22:11:42.022: INFO: Pod "webserver-deployment-c7997dcc8-k8nks" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k8nks webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-k8nks e6f5fea9-516d-4301-8c03-0e465d50ad11 692473 0 2020-03-10 22:11:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026afa20 0xc0026afa21}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-10 22:11:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 22:11:42.022: INFO: Pod "webserver-deployment-c7997dcc8-npdrn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-npdrn webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-npdrn 1e2a9918-e572-4bd8-92ff-edee2f331b55 692542 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026afba0 0xc0026afba1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 22:11:42.023: INFO: Pod "webserver-deployment-c7997dcc8-v2nms" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v2nms webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-v2nms 01f76a15-b7c3-4ca0-94a2-8ec17dde7288 692536 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026afd00 0xc0026afd01}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 22:11:42.023: INFO: Pod "webserver-deployment-c7997dcc8-v8z4d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v8z4d webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-v8z4d ecd3f94c-7784-426b-8234-2b4f6ac6b2ea 692538 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026afe40 0xc0026afe41}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 22:11:42.023: INFO: Pod "webserver-deployment-c7997dcc8-zg5nk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zg5nk webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-zg5nk a24acadc-00f4-4d35-ae2e-c9ca8ea1338f 692516 0 2020-03-10 22:11:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 346fb6fe-bdbc-4c40-aefc-fc1fbd1ec5e3 0xc0026aff70 0xc0026aff71}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgc92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgc92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-10 22:11:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:11:42.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-372" for this suite.
• [SLOW TEST:8.737 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":278,"skipped":4528,"failed":0}
SSSSSSSS
Mar 10 22:11:42.177: INFO: Running AfterSuite actions on all nodes
Mar 10 22:11:42.177: INFO: Running AfterSuite actions on node 1
Mar 10 22:11:42.177: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0}

Ran 278 of 4814 Specs in 3829.948 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped
PASS
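
Note on the spec that just passed: it resizes a Deployment while a rollout to the deliberately unpullable webserver:404 image is still in flight, then checks that the new replica total is split across the old and new ReplicaSets in proportion to their current sizes (which is why both ReplicaSets above still own Pending pods stuck in ContainerCreating). The Go sketch below illustrates that arithmetic only; proportionalScale is a hypothetical helper, not the controller's actual code, and it ignores the maxSurge limits and annotation bookkeeping the real deployment controller also applies.

    package main

    import "fmt"

    // proportionalScale sketches the idea behind Deployment proportional
    // scaling: when a Deployment is resized mid-rollout, the desired total
    // is spread across its ReplicaSets in proportion to their current
    // sizes, so neither the old nor the new ReplicaSet absorbs the whole
    // change. Simplified illustration only.
    func proportionalScale(current []int32, desiredTotal int32) []int32 {
        var curTotal int32
        for _, c := range current {
            curTotal += c
        }
        out := make([]int32, len(current))
        if curTotal == 0 {
            return out // degenerate case; the real controller handles this separately
        }
        var allocated int32
        for i, c := range current {
            out[i] = c * desiredTotal / curTotal // keep each ReplicaSet's share, rounded down
            allocated += out[i]
        }
        // Hand out any rounding leftover one replica at a time, in index order.
        for i := 0; allocated < desiredTotal; i = (i + 1) % len(out) {
            out[i]++
            allocated++
        }
        return out
    }

    func main() {
        // Example: old ReplicaSet at 8 replicas, new one at 5 (13 total),
        // scaled to 30. Both grow roughly in proportion instead of one
        // ReplicaSet taking all 17 new replicas.
        fmt.Println(proportionalScale([]int32{8, 5}, 30)) // [19 11]
    }

For example, scaling ReplicaSets of 8 and 5 replicas to a total of 30 yields [19 11], preserving the rough 8:5 split, which is the behavior the conformance spec asserts on the real ReplicaSets.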